Test Report: Docker_Windows 22168

9b787847521167b42f6debd67da4dc2d018928d7:2025-12-17:42812

Failed tests (35/427)

Order  Failed test  Duration (s)
61 TestForceSystemdFlag 53.11
67 TestErrorSpam/setup 48.12
169 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy 519.16
171 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart 374.74
173 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods 53.54
183 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd 54.31
184 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly 54.23
185 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig 741.29
186 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth 54.41
189 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService 20.2
195 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd 5.29
199 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect 124.25
201 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim 243.4
205 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL 23.84
211 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels 52.61
216 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp 0.11
217 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List 0.47
218 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput 0.5
219 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS 0.54
221 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel 0.52
224 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup 20.2
225 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format 0.52
226 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL 0.46
232 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell 2.8
360 TestKubernetesUpgrade 876.66
460 TestStartStop/group/no-preload/serial/FirstStart 540.76
488 TestStartStop/group/newest-cni/serial/FirstStart 522.78
497 TestStartStop/group/no-preload/serial/DeployApp 5.49
498 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 117.87
501 TestStartStop/group/no-preload/serial/SecondStart 378.19
503 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 121.36
506 TestStartStop/group/newest-cni/serial/SecondStart 381.39
507 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 545.37
511 TestStartStop/group/newest-cni/serial/Pause 13.3
512 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 215.34
TestForceSystemdFlag (53.11s)
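For context on how this failure is produced: the harness shells out to the freshly built minikube binary and captures its combined output, as the "(dbg) Run:" lines below show. A minimal Go sketch of that pattern, for illustration only (the real harness in test/integration wraps this in its own Run helper and cleanup logic):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Profile name and flags taken from this run's log; the binary path
        // is the Windows build under test.
        cmd := exec.Command("out/minikube-windows-amd64.exe", "start",
            "-p", "force-systemd-flag-158600", "--memory=3072",
            "--force-systemd", "--alsologtostderr", "-v=5", "--driver=docker")
        out, err := cmd.CombinedOutput()
        fmt.Println(string(out))
        if err != nil {
            fmt.Println("non-zero exit:", err) // this run reported exit status 85
        }
    }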

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-158600 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker
docker_test.go:91: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p force-systemd-flag-158600 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker: exit status 85 (51.632151s)

-- stdout --
	* [force-systemd-flag-158600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "force-systemd-flag-158600" primary control-plane node in "force-systemd-flag-158600" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Stopping node "force-systemd-flag-158600"  ...
	* Powering off "force-systemd-flag-158600" via SSH ...
	* Deleting "force-systemd-flag-158600" in docker ...
	
	

-- /stdout --
** stderr ** 
	I1217 01:36:41.391587    2932 out.go:360] Setting OutFile to fd 2040 ...
	I1217 01:36:41.466579    2932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:36:41.466579    2932 out.go:374] Setting ErrFile to fd 1240...
	I1217 01:36:41.466579    2932 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:36:41.484583    2932 out.go:368] Setting JSON to false
	I1217 01:36:41.487582    2932 start.go:133] hostinfo: {"hostname":"minikube4","uptime":6989,"bootTime":1765928411,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 01:36:41.488576    2932 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 01:36:41.493583    2932 out.go:179] * [force-systemd-flag-158600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 01:36:41.499579    2932 notify.go:221] Checking for updates...
	I1217 01:36:41.503581    2932 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 01:36:41.505581    2932 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:36:41.510583    2932 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 01:36:41.514586    2932 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:36:41.517580    2932 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:36:41.521591    2932 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:36:41.648587    2932 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 01:36:41.652583    2932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:36:41.970031    2932 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:82 SystemTime:2025-12-17 01:36:41.944866588 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:36:41.980034    2932 out.go:179] * Using the docker driver based on user configuration
	I1217 01:36:41.984034    2932 start.go:309] selected driver: docker
	I1217 01:36:41.984034    2932 start.go:927] validating driver "docker" against <nil>
	I1217 01:36:41.984034    2932 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:36:42.041027    2932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:36:42.438716    2932 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:57 OomKillDisable:true NGoroutines:82 SystemTime:2025-12-17 01:36:42.390883063 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:36:42.438716    2932 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 01:36:42.439721    2932 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 01:36:42.448731    2932 out.go:179] * Using Docker Desktop driver with root privileges
	I1217 01:36:42.455719    2932 cni.go:84] Creating CNI manager for ""
	I1217 01:36:42.455719    2932 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 01:36:42.455719    2932 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 01:36:42.455719    2932 start.go:353] cluster config:
	{Name:force-systemd-flag-158600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-158600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluste
r.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:36:42.460713    2932 out.go:179] * Starting "force-systemd-flag-158600" primary control-plane node in "force-systemd-flag-158600" cluster
	I1217 01:36:42.462719    2932 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 01:36:42.467715    2932 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:36:42.469722    2932 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1217 01:36:42.469722    2932 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:36:42.469722    2932 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
	I1217 01:36:42.469722    2932 cache.go:65] Caching tarball of preloaded images
	I1217 01:36:42.469722    2932 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 01:36:42.469722    2932 cache.go:68] Finished verifying existence of preloaded tar for v1.34.2 on docker
	I1217 01:36:42.470725    2932 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-158600\config.json ...
	I1217 01:36:42.470725    2932 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\force-systemd-flag-158600\config.json: {Name:mkbd2d967fb12ec37ce03a9ee1af862588329c87 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:36:42.556729    2932 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:36:42.556729    2932 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:36:42.556729    2932 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:36:42.556729    2932 start.go:360] acquireMachinesLock for force-systemd-flag-158600: {Name:mkee988271dfb6318f40bf08ac96fd6342fbea6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:36:42.556729    2932 start.go:364] duration metric: took 0s to acquireMachinesLock for "force-systemd-flag-158600"
	I1217 01:36:42.556729    2932 start.go:93] Provisioning new machine with config: &{Name:force-systemd-flag-158600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:force-systemd-flag-158600 Namespace:default APIServer
HAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: Static
IP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 01:36:42.556729    2932 start.go:125] createHost starting for "" (driver="docker")
	I1217 01:36:42.560718    2932 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 01:36:42.560718    2932 start.go:159] libmachine.API.Create for "force-systemd-flag-158600" (driver="docker")
	I1217 01:36:42.560718    2932 client.go:173] LocalClient.Create starting
	I1217 01:36:42.560718    2932 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1217 01:36:42.560718    2932 main.go:143] libmachine: Decoding PEM data...
	I1217 01:36:42.560718    2932 main.go:143] libmachine: Parsing certificate...
	I1217 01:36:42.560718    2932 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1217 01:36:42.561721    2932 main.go:143] libmachine: Decoding PEM data...
	I1217 01:36:42.561721    2932 main.go:143] libmachine: Parsing certificate...
	I1217 01:36:42.565723    2932 cli_runner.go:164] Run: docker network inspect force-systemd-flag-158600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 01:36:42.615720    2932 cli_runner.go:211] docker network inspect force-systemd-flag-158600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 01:36:42.620714    2932 network_create.go:284] running [docker network inspect force-systemd-flag-158600] to gather additional debugging logs...
	I1217 01:36:42.620714    2932 cli_runner.go:164] Run: docker network inspect force-systemd-flag-158600
	W1217 01:36:42.669725    2932 cli_runner.go:211] docker network inspect force-systemd-flag-158600 returned with exit code 1
	I1217 01:36:42.669725    2932 network_create.go:287] error running [docker network inspect force-systemd-flag-158600]: docker network inspect force-systemd-flag-158600: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network force-systemd-flag-158600 not found
	I1217 01:36:42.669725    2932 network_create.go:289] output of [docker network inspect force-systemd-flag-158600]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network force-systemd-flag-158600 not found
	
	** /stderr **
	I1217 01:36:42.672722    2932 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:36:42.743721    2932 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:36:42.775719    2932 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:36:42.791713    2932 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0017c70e0}
	I1217 01:36:42.791713    2932 network_create.go:124] attempt to create docker network force-systemd-flag-158600 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1217 01:36:42.795720    2932 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-158600 force-systemd-flag-158600
	W1217 01:36:42.842715    2932 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-158600 force-systemd-flag-158600 returned with exit code 1
	W1217 01:36:42.842715    2932 network_create.go:149] failed to create docker network force-systemd-flag-158600 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-158600 force-systemd-flag-158600: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1217 01:36:42.842715    2932 network_create.go:116] failed to create docker network force-systemd-flag-158600 192.168.67.0/24, will retry: subnet is taken
	I1217 01:36:42.870722    2932 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:36:42.902720    2932 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:36:42.918724    2932 network.go:206] using free private subnet 192.168.85.0/24: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0016dd470}
	I1217 01:36:42.918724    2932 network_create.go:124] attempt to create docker network force-systemd-flag-158600 192.168.85.0/24 with gateway 192.168.85.1 and MTU of 1500 ...
	I1217 01:36:42.921721    2932 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.85.0/24 --gateway=192.168.85.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=force-systemd-flag-158600 force-systemd-flag-158600
	I1217 01:36:43.061721    2932 network_create.go:108] docker network force-systemd-flag-158600 192.168.85.0/24 created
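The retry just above is minikube's subnet probing: candidate private /24 ranges (192.168.49.0, 192.168.58.0, 192.168.67.0, ...) are tried in order, and the daemon error "Pool overlaps with other one on this address space" means another network already owns the attempted range. A hedged Go sketch for checking which subnets are taken, shelling out to the same docker CLI commands this log uses:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // List every docker network, then print the subnet(s) each one owns.
        names, err := exec.Command("docker", "network", "ls", "--format", "{{.Name}}").Output()
        if err != nil {
            panic(err)
        }
        for _, name := range strings.Fields(string(names)) {
            subnet, _ := exec.Command("docker", "network", "inspect", name,
                "--format", "{{range .IPAM.Config}}{{.Subnet}} {{end}}").Output()
            fmt.Printf("%s: %s\n", name, strings.TrimSpace(string(subnet)))
        }
    }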
	I1217 01:36:43.061721    2932 kic.go:121] calculated static IP "192.168.85.2" for the "force-systemd-flag-158600" container
	I1217 01:36:43.067724    2932 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 01:36:43.123731    2932 cli_runner.go:164] Run: docker volume create force-systemd-flag-158600 --label name.minikube.sigs.k8s.io=force-systemd-flag-158600 --label created_by.minikube.sigs.k8s.io=true
	I1217 01:36:43.191721    2932 oci.go:103] Successfully created a docker volume force-systemd-flag-158600
	I1217 01:36:43.197723    2932 cli_runner.go:164] Run: docker run --rm --name force-systemd-flag-158600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-158600 --entrypoint /usr/bin/test -v force-systemd-flag-158600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 01:36:45.082006    2932 cli_runner.go:217] Completed: docker run --rm --name force-systemd-flag-158600-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-158600 --entrypoint /usr/bin/test -v force-systemd-flag-158600:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.8842564s)
	I1217 01:36:45.082006    2932 oci.go:107] Successfully prepared a docker volume force-systemd-flag-158600
	I1217 01:36:45.082006    2932 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
	I1217 01:36:45.082006    2932 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 01:36:45.085021    2932 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-158600:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 01:37:11.879557    2932 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v force-systemd-flag-158600:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (26.7941365s)
	I1217 01:37:11.879634    2932 kic.go:203] duration metric: took 26.7972555s to extract preloaded images to volume ...
	I1217 01:37:11.886482    2932 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:37:12.285024    2932 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:0 ContainersPaused:0 ContainersStopped:1 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:58 OomKillDisable:true NGoroutines:93 SystemTime:2025-12-17 01:37:12.262258163 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:37:12.292026    2932 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 01:37:12.690530    2932 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname force-systemd-flag-158600 --name force-systemd-flag-158600 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=force-systemd-flag-158600 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=force-systemd-flag-158600 --network force-systemd-flag-158600 --ip 192.168.85.2 --volume force-systemd-flag-158600:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 01:37:13.622775    2932 cli_runner.go:164] Run: docker container inspect force-systemd-flag-158600 --format={{.State.Running}}
	I1217 01:37:13.676703    2932 cli_runner.go:164] Run: docker container inspect force-systemd-flag-158600 --format={{.State.Status}}
	I1217 01:37:13.735516    2932 cli_runner.go:164] Run: docker exec force-systemd-flag-158600 stat /var/lib/dpkg/alternatives/iptables
	I1217 01:37:13.840517    2932 oci.go:144] the created container "force-systemd-flag-158600" has a running status.
	I1217 01:37:13.840517    2932 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-158600\id_rsa...
	I1217 01:37:14.181087    2932 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-158600\id_rsa.pub -> /home/docker/.ssh/authorized_keys
	I1217 01:37:14.198114    2932 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-158600\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 01:37:14.296099    2932 cli_runner.go:164] Run: docker container inspect force-systemd-flag-158600 --format={{.State.Status}}
	I1217 01:37:14.365103    2932 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 01:37:14.365103    2932 kic_runner.go:114] Args: [docker exec --privileged force-systemd-flag-158600 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 01:37:14.494104    2932 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-158600\id_rsa...
	I1217 01:37:17.377815    2932 cli_runner.go:164] Run: docker container inspect force-systemd-flag-158600 --format={{.State.Status}}
	I1217 01:37:17.438198    2932 machine.go:94] provisionDockerMachine start ...
	I1217 01:37:17.443218    2932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-158600
	I1217 01:37:17.500207    2932 main.go:143] libmachine: Using SSH client type: native
	I1217 01:37:17.514705    2932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 60517 <nil> <nil>}
	I1217 01:37:17.514705    2932 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:37:17.700157    2932 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-158600
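All of the provisioning SSH traffic below goes to 127.0.0.1:60517, the random host port Docker published for the container's 22/tcp, resolved with the inspect template shown above. A minimal standalone version of that lookup (container name taken from this run):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "container", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
            "force-systemd-flag-158600").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh host port:", strings.TrimSpace(string(out))) // 60517 in this run
    }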
	
	I1217 01:37:17.700157    2932 ubuntu.go:182] provisioning hostname "force-systemd-flag-158600"
	I1217 01:37:17.705149    2932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-158600
	I1217 01:37:17.759147    2932 main.go:143] libmachine: Using SSH client type: native
	I1217 01:37:17.759147    2932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 60517 <nil> <nil>}
	I1217 01:37:17.759147    2932 main.go:143] libmachine: About to run SSH command:
	sudo hostname force-systemd-flag-158600 && echo "force-systemd-flag-158600" | sudo tee /etc/hostname
	I1217 01:37:17.940511    2932 main.go:143] libmachine: SSH cmd err, output: <nil>: force-systemd-flag-158600
	
	I1217 01:37:17.945524    2932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-158600
	I1217 01:37:18.014521    2932 main.go:143] libmachine: Using SSH client type: native
	I1217 01:37:18.014521    2932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 60517 <nil> <nil>}
	I1217 01:37:18.015526    2932 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sforce-systemd-flag-158600' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 force-systemd-flag-158600/g' /etc/hosts;
				else 
					echo '127.0.1.1 force-systemd-flag-158600' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:37:18.192230    2932 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:37:18.192230    2932 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 01:37:18.192230    2932 ubuntu.go:190] setting up certificates
	I1217 01:37:18.192230    2932 provision.go:84] configureAuth start
	I1217 01:37:18.196427    2932 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" force-systemd-flag-158600
	I1217 01:37:18.248845    2932 provision.go:143] copyHostCerts
	I1217 01:37:18.248845    2932 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1217 01:37:18.249838    2932 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 01:37:18.249838    2932 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 01:37:18.249838    2932 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 01:37:18.250841    2932 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1217 01:37:18.250841    2932 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 01:37:18.250841    2932 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 01:37:18.250841    2932 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 01:37:18.251840    2932 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1217 01:37:18.251840    2932 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 01:37:18.251840    2932 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 01:37:18.251840    2932 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 01:37:18.252839    2932 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.force-systemd-flag-158600 san=[127.0.0.1 192.168.85.2 force-systemd-flag-158600 localhost minikube]
	I1217 01:37:18.365070    2932 provision.go:177] copyRemoteCerts
	I1217 01:37:18.369821    2932 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:37:18.373642    2932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-158600
	I1217 01:37:18.429614    2932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60517 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-158600\id_rsa Username:docker}
	I1217 01:37:18.549647    2932 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1217 01:37:18.549647    2932 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 01:37:18.581675    2932 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1217 01:37:18.582429    2932 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I1217 01:37:18.610830    2932 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1217 01:37:18.623700    2932 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:37:18.659964    2932 provision.go:87] duration metric: took 467.7266ms to configureAuth
	I1217 01:37:18.659964    2932 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:37:18.659964    2932 config.go:182] Loaded profile config "force-systemd-flag-158600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:37:18.663960    2932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-158600
	I1217 01:37:18.717951    2932 main.go:143] libmachine: Using SSH client type: native
	I1217 01:37:18.717951    2932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 60517 <nil> <nil>}
	I1217 01:37:18.717951    2932 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 01:37:18.882796    2932 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 01:37:18.882796    2932 ubuntu.go:71] root file system type: overlay
	I1217 01:37:18.882796    2932 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 01:37:18.887779    2932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-158600
	I1217 01:37:18.941776    2932 main.go:143] libmachine: Using SSH client type: native
	I1217 01:37:18.942779    2932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 60517 <nil> <nil>}
	I1217 01:37:18.942779    2932 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 01:37:19.133783    2932 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 01:37:19.139077    2932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-158600
	I1217 01:37:19.192568    2932 main.go:143] libmachine: Using SSH client type: native
	I1217 01:37:19.193564    2932 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 60517 <nil> <nil>}
	I1217 01:37:19.193564    2932 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 01:37:20.018459    2932 main.go:143] libmachine: SSH cmd err, output: Process exited with status 1: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-17 01:37:19.119145368 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
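Note: the failing one-liner is an update-if-changed idiom. "diff -u" exits 0 when the two unit files already match (so nothing else runs) and non-zero when they differ, which triggers the install-and-restart branch. In this run the files differed and the subsequent "systemctl restart docker" failed, so the whole command reported status 1. The same logic, expanded for readability (same paths and flags as above):

	# Sketch of the update-if-changed idiom from the command above:
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	fi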
	
	I1217 01:37:20.018459    2932 ubuntu.go:208] Error setting container-runtime options during provisioning ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-17 01:37:19.119145368 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I1217 01:37:20.018459    2932 machine.go:97] duration metric: took 2.5802246s to provisionDockerMachine
	I1217 01:37:20.018459    2932 client.go:176] duration metric: took 37.4572198s to LocalClient.Create
	I1217 01:37:22.023616    2932 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:37:22.026835    2932 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" force-systemd-flag-158600
	I1217 01:37:22.077618    2932 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60517 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-158600\id_rsa Username:docker}
	I1217 01:37:22.197616    2932 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:37:22.206419    2932 start.go:128] duration metric: took 39.6491378s to createHost
	I1217 01:37:22.206419    2932 start.go:83] releasing machines lock for "force-systemd-flag-158600", held for 39.6491378s
	W1217 01:37:22.206496    2932 start.go:715] error starting host: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-17 01:37:19.119145368 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I1217 01:37:22.213789    2932 cli_runner.go:164] Run: docker container inspect force-systemd-flag-158600 --format={{.State.Status}}
	I1217 01:37:22.265018    2932 stop.go:39] StopHost: force-systemd-flag-158600
	W1217 01:37:22.265018    2932 register.go:133] "Stopping" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I1217 01:37:22.326584    2932 out.go:179] * Stopping node "force-systemd-flag-158600"  ...
	I1217 01:37:22.392368    2932 cli_runner.go:164] Run: docker container inspect force-systemd-flag-158600 --format={{.State.Status}}
	W1217 01:37:22.444411    2932 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I1217 01:37:22.516420    2932 out.go:179] * Powering off "force-systemd-flag-158600" via SSH ...
	I1217 01:37:22.577266    2932 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-158600 /bin/bash -c "sudo init 0"
	I1217 01:37:23.833907    2932 cli_runner.go:164] Run: docker container inspect force-systemd-flag-158600 --format={{.State.Status}}
	I1217 01:37:24.945815    2932 cli_runner.go:217] Completed: docker container inspect force-systemd-flag-158600 --format={{.State.Status}}: (1.1118555s)
	I1217 01:37:24.945815    2932 oci.go:667] container force-systemd-flag-158600 status is Stopped
	I1217 01:37:24.945815    2932 oci.go:679] Successfully shutdown container force-systemd-flag-158600
	I1217 01:37:24.945815    2932 stop.go:96] shutdown container: err=<nil>
	I1217 01:37:24.945815    2932 main.go:143] libmachine: Stopping "force-systemd-flag-158600"...
	I1217 01:37:24.953874    2932 cli_runner.go:164] Run: docker container inspect force-systemd-flag-158600 --format={{.State.Status}}
	I1217 01:37:25.009218    2932 stop.go:66] stop err: Machine "force-systemd-flag-158600" is already stopped.
	I1217 01:37:25.009218    2932 stop.go:69] host is already stopped
	W1217 01:37:26.009641    2932 register.go:133] "PowerOff" was not found within the registered steps for "Initial Minikube Setup": [Initial Minikube Setup Selecting Driver Downloading Artifacts Starting Node Updating Driver Pulling Base Image Running on Localhost Local OS Release Creating Container Creating VM Running Remotely Preparing Kubernetes Generating certificates Booting control plane Configuring RBAC rules Configuring CNI Configuring Localhost Environment Verifying Kubernetes Enabling Addons Done]
	I1217 01:37:26.013664    2932 out.go:179] * Deleting "force-systemd-flag-158600" in docker ...
	I1217 01:37:26.018642    2932 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-158600
	I1217 01:37:26.079318    2932 cli_runner.go:164] Run: docker container inspect force-systemd-flag-158600 --format={{.State.Status}}
	I1217 01:37:26.135187    2932 cli_runner.go:164] Run: docker exec --privileged -t force-systemd-flag-158600 /bin/bash -c "sudo init 0"
	W1217 01:37:26.199764    2932 cli_runner.go:211] docker exec --privileged -t force-systemd-flag-158600 /bin/bash -c "sudo init 0" returned with exit code 1
	I1217 01:37:26.200298    2932 oci.go:659] error shutdown force-systemd-flag-158600: docker exec --privileged -t force-systemd-flag-158600 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: container 517abfeacec1a112e4fa6bade9666ffe9d9413980f500ba95b8a142ea14177e8 is not running
	I1217 01:37:27.207823    2932 cli_runner.go:164] Run: docker container inspect force-systemd-flag-158600 --format={{.State.Status}}
	I1217 01:37:27.275813    2932 oci.go:667] container force-systemd-flag-158600 status is Stopped
	I1217 01:37:27.275813    2932 oci.go:679] Successfully shutdown container force-systemd-flag-158600
	I1217 01:37:27.281812    2932 cli_runner.go:164] Run: docker rm -f -v force-systemd-flag-158600
	I1217 01:37:27.376096    2932 cli_runner.go:164] Run: docker container inspect -f {{.Id}} force-systemd-flag-158600
	W1217 01:37:27.430115    2932 cli_runner.go:211] docker container inspect -f {{.Id}} force-systemd-flag-158600 returned with exit code 1
	I1217 01:37:27.435089    2932 cli_runner.go:164] Run: docker network inspect force-systemd-flag-158600 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:37:27.497102    2932 cli_runner.go:164] Run: docker network rm force-systemd-flag-158600
	W1217 01:37:27.893725    2932 start.go:720] delete host: api remove: unlinkat C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-158600\id_rsa: The process cannot access the file because it is being used by another process.
	W1217 01:37:27.895080    2932 out.go:285] ! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-17 01:37:19.119145368 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	! StartHost failed, but will try again: creating host: create: provisioning: ssh command error:
	command : sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	err     : Process exited with status 1
	output  : --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-17 01:37:19.119145368 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	I1217 01:37:27.895080    2932 start.go:730] Will try again in 5 seconds ...
	I1217 01:37:32.895416    2932 start.go:360] acquireMachinesLock for force-systemd-flag-158600: {Name:mkee988271dfb6318f40bf08ac96fd6342fbea6a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:37:32.895416    2932 start.go:364] duration metric: took 0s to acquireMachinesLock for "force-systemd-flag-158600"
	I1217 01:37:32.895416    2932 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:37:32.895416    2932 fix.go:54] fixHost starting: 
	I1217 01:37:32.896083    2932 fix.go:56] duration metric: took 667µs for fixHost
	I1217 01:37:32.896196    2932 start.go:83] releasing machines lock for "force-systemd-flag-158600", held for 779.5µs
	W1217 01:37:32.896546    2932 out.go:285] * Failed to start docker container. Running "minikube delete -p force-systemd-flag-158600" may fix it: error loading existing host. Please try running [minikube delete], then run [minikube start] again: filestore "force-systemd-flag-158600": open C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-158600\config.json: The system cannot find the file specified.
	* Failed to start docker container. Running "minikube delete -p force-systemd-flag-158600" may fix it: error loading existing host. Please try running [minikube delete], then run [minikube start] again: filestore "force-systemd-flag-158600": open C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-158600\config.json: The system cannot find the file specified.
	I1217 01:37:32.906252    2932 out.go:203] 
	W1217 01:37:32.908180    2932 out.go:285] X Exiting due to GUEST_NOT_FOUND: Failed to start host: error loading existing host. Please try running [minikube delete], then run [minikube start] again: filestore "force-systemd-flag-158600": open C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-158600\config.json: The system cannot find the file specified.
	X Exiting due to GUEST_NOT_FOUND: Failed to start host: error loading existing host. Please try running [minikube delete], then run [minikube start] again: filestore "force-systemd-flag-158600": open C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-158600\config.json: The system cannot find the file specified.
	W1217 01:37:32.908180    2932 out.go:285] * Suggestion: minikube is missing files relating to your guest environment. This can be fixed by running 'minikube delete'
	* Suggestion: minikube is missing files relating to your guest environment. This can be fixed by running 'minikube delete'
	W1217 01:37:32.908180    2932 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/9130
	* Related issue: https://github.com/kubernetes/minikube/issues/9130
	I1217 01:37:32.910180    2932 out.go:203] 

** /stderr **
docker_test.go:93: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p force-systemd-flag-158600 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker" : exit status 85
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-158600 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p force-systemd-flag-158600 ssh "docker info --format {{.CgroupDriver}}": exit status 80 (211.4264ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STATUS: Unable to get control-plane node force-systemd-flag-158600 host status: load: filestore "force-systemd-flag-158600": open C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-158600\config.json: The system cannot find the file specified.

** /stderr **
docker_test.go:112: failed to get docker cgroup driver. args "out/minikube-windows-amd64.exe -p force-systemd-flag-158600 ssh \"docker info --format {{.CgroupDriver}}\"": exit status 80
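Note: this verification step asks the Docker daemon inside the node which cgroup driver it is using; on a healthy --force-systemd node the expected answer is "systemd". Run against any reachable Docker daemon, the probe is simply:

	# Prints the daemon's cgroup driver, e.g. "systemd" or "cgroupfs":
	docker info --format '{{.CgroupDriver}}'

Here it could not run at all, because the machine's config.json had already been deleted during the failed-start cleanup.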
docker_test.go:106: *** TestForceSystemdFlag FAILED at 2025-12-17 01:37:33.2606076 +0000 UTC m=+5548.296810901
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestForceSystemdFlag]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestForceSystemdFlag]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect force-systemd-flag-158600
helpers_test.go:244: (dbg) docker inspect force-systemd-flag-158600:

-- stdout --
	[
	    {
	        "CreatedAt": "2025-12-17T01:36:43Z",
	        "Driver": "local",
	        "Labels": {
	            "created_by.minikube.sigs.k8s.io": "true",
	            "name.minikube.sigs.k8s.io": "force-systemd-flag-158600"
	        },
	        "Mountpoint": "/var/lib/docker/volumes/force-systemd-flag-158600/_data",
	        "Name": "force-systemd-flag-158600",
	        "Options": null,
	        "Scope": "local"
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-158600 -n force-systemd-flag-158600
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p force-systemd-flag-158600 -n force-systemd-flag-158600: exit status 7 (151.4486ms)

-- stdout --
	Nonexistent

-- /stdout --
** stderr ** 
	E1217 01:37:33.461128    9312 status.go:121] status error: host: load: filestore "force-systemd-flag-158600": open C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\force-systemd-flag-158600\config.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:248: status error: exit status 7 (may be ok)
helpers_test.go:250: "force-systemd-flag-158600" host is not running, skipping log retrieval (state="Nonexistent")
helpers_test.go:176: Cleaning up "force-systemd-flag-158600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-158600
--- FAIL: TestForceSystemdFlag (53.11s)

TestErrorSpam/setup (48.12s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-365700 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 --driver=docker
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-365700 -n=1 --memory=3072 --wait=false --log_dir=C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 --driver=docker: (48.1202759s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube container"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-365700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
- KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
- MINIKUBE_LOCATION=22168
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the docker driver based on user configuration
* Using Docker Desktop driver with root privileges
* Starting "nospam-365700" primary control-plane node in "nospam-365700" cluster
* Pulling base image v0.0.48-1765661130-22141 ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-365700" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube container
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (48.12s)
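Note: the two "unexpected stderr" lines come from minikube's probe of https://registry.k8s.io/ from inside the node; the test treats any stderr as spam, so a blocked registry fails it even though the cluster itself started. Per the proxy guide linked in the output, the usual remedy is to export proxy settings before "minikube start", including the node IP in NO_PROXY (the StartWithProxy log below warns about exactly that). A sketch with placeholder values, not taken from this run:

	# Illustrative only; the proxy host and port are placeholders:
	export HTTP_PROXY=http://proxy.example.com:3128
	export HTTPS_PROXY=http://proxy.example.com:3128
	export NO_PROXY=localhost,127.0.0.1,192.168.49.2
	out/minikube-windows-amd64.exe start -p nospam-365700 --driver=docker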

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (519.16s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-409700 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0
E1217 00:25:33.688511    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:28:14.096806    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:28:14.103683    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:28:14.115493    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:28:14.137597    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:28:14.179149    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:28:14.261815    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:28:14.423442    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:28:14.745291    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:28:15.386932    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:28:16.668781    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:28:19.231491    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:28:24.353987    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:28:34.596941    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:28:55.079219    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:29:36.041611    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:30:33.691392    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:30:57.964135    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:31:56.762602    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-409700 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m36.2650177s)

-- stdout --
	* [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "functional-409700" primary control-plane node in "functional-409700" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Found network options:
	  - HTTP_PROXY=localhost:56612
	* Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	  - HTTP_PROXY=localhost:56612
	
	

-- /stdout --
** stderr ** 
	! Local proxy ignored: not passing HTTP_PROXY=localhost:56612 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:56612 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:56612 to docker env.
	! Local proxy ignored: not passing HTTP_PROXY=localhost:56612 to docker env.
	! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-409700 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-409700 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001424837s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000178587s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000178587s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
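The stderr above shows kubeadm giving up on the kubelet's http://127.0.0.1:10248/healthz probe after 4m0s, with two leads: the Docker daemon recorded later in this log reports CgroupDriver:cgroupfs, and the SystemVerification warning says kubelet v1.35 or newer rejects cgroup v1 hosts unless the kubelet option 'FailCgroupV1' is explicitly set to 'false'. A minimal retry along the lines the suggestion itself gives (profile name and flags copied from the failed run; whether the systemd cgroup driver alone clears the cgroup v1 issue on this WSL2 host is not verified here):

	out/minikube-windows-amd64.exe delete -p functional-409700
	out/minikube-windows-amd64.exe start -p functional-409700 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd
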
functional_test.go:2241: failed minikube start. args "out/minikube-windows-amd64.exe start -p functional-409700 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 109
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-409700
helpers_test.go:244: (dbg) docker inspect functional-409700:

-- stdout --
	[
	    {
	        "Id": "ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de",
	        "Created": "2025-12-17T00:24:05.223199249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:24:05.522288836Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hosts",
	        "LogPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de-json.log",
	        "Name": "/functional-409700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-409700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-409700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-409700",
	                "Source": "/var/lib/docker/volumes/functional-409700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-409700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-409700",
	                "name.minikube.sigs.k8s.io": "functional-409700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e875b43ca920e8e90c82b8f1c4d2b0999a57d980ebe17c6406f45a4ccb58168",
	            "SandboxKey": "/var/run/docker/netns/6e875b43ca92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56623"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56619"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56620"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56621"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56622"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-409700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ee1b2722ed4e503e063723d4c0c00abc99d4e57387b6e181156511528a5a0896",
	                    "EndpointID": "42fbe7a4b084643a92cc2b6c93734665bcde06afb5eef9fe47b1c8f2757b2d71",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-409700",
	                        "ee5097ea8c4b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
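Every port in the inspect output above is published with HostPort "0", i.e. Docker picks an ephemeral host port at container start, and NetworkSettings.Ports carries the resolved bindings (8441/tcp -> 127.0.0.1:56622, 22/tcp -> 127.0.0.1:56623, and so on). A single mapping can be read back with the same Go template minikube itself runs later in this log (POSIX-shell quoting shown; cmd/PowerShell need the inner quotes escaped):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' functional-409700
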
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700: exit status 6 (657.0647ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 00:32:23.616404   11060 status.go:458] kubeconfig endpoint: get endpoint: "functional-409700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
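The exit status 6 here looks like the kubeconfig drift the stdout warns about rather than a separate failure: the start aborted before a "functional-409700" entry was written into C:\Users\jenkins.minikube4\minikube-integration\kubeconfig, so status sees the host Running but cannot resolve the apiserver endpoint. The repair the warning names, sketched for this profile (it only rewrites the kubeconfig entry; it does nothing for the kubelet failure above):

	out/minikube-windows-amd64.exe update-context -p functional-409700
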
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 logs -n 25: (1.2706015s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                                ARGS                                                                                 │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-045600 ssh sudo cat /usr/share/ca-certificates/41682.pem                                                                                                 │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │ 17 Dec 25 00:18 UTC │
	│ ssh            │ functional-045600 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                            │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │ 17 Dec 25 00:18 UTC │
	│ service        │ functional-045600 service hello-node --url --format={{.IP}}                                                                                                         │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │                     │
	│ ssh            │ functional-045600 ssh sudo cat /etc/test/nested/copy/4168/hosts                                                                                                     │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │ 17 Dec 25 00:18 UTC │
	│ docker-env     │ functional-045600 docker-env                                                                                                                                        │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │ 17 Dec 25 00:18 UTC │
	│ dashboard      │ --url --port 36195 -p functional-045600 --alsologtostderr -v=1                                                                                                      │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │                     │
	│ service        │ functional-045600 service hello-node --url                                                                                                                          │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │                     │
	│ cp             │ functional-045600 cp testdata\cp-test.txt /home/docker/cp-test.txt                                                                                                  │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ ssh            │ functional-045600 ssh -n functional-045600 sudo cat /home/docker/cp-test.txt                                                                                        │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ cp             │ functional-045600 cp functional-045600:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd2737548863\001\cp-test.txt │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ ssh            │ functional-045600 ssh -n functional-045600 sudo cat /home/docker/cp-test.txt                                                                                        │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ cp             │ functional-045600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                           │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ ssh            │ functional-045600 ssh -n functional-045600 sudo cat /tmp/does/not/exist/cp-test.txt                                                                                 │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls --format short --alsologtostderr                                                                                                         │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls --format json --alsologtostderr                                                                                                          │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls --format table --alsologtostderr                                                                                                         │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls --format yaml --alsologtostderr                                                                                                          │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ ssh            │ functional-045600 ssh pgrep buildkitd                                                                                                                               │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │                     │
	│ image          │ functional-045600 image build -t localhost/my-image:functional-045600 testdata\build --alsologtostderr                                                              │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls                                                                                                                                          │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                                                             │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                                                             │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                                                             │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ delete         │ -p functional-045600                                                                                                                                                │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:23 UTC │ 17 Dec 25 00:23 UTC │
	│ start          │ -p functional-409700 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:23 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:23:46
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:23:46.718170    9844 out.go:360] Setting OutFile to fd 1352 ...
	I1217 00:23:46.761460    9844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:23:46.761460    9844 out.go:374] Setting ErrFile to fd 2024...
	I1217 00:23:46.761460    9844 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:23:46.775730    9844 out.go:368] Setting JSON to false
	I1217 00:23:46.777734    9844 start.go:133] hostinfo: {"hostname":"minikube4","uptime":2615,"bootTime":1765928411,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:23:46.777734    9844 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:23:46.784368    9844 out.go:179] * [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:23:46.787863    9844 notify.go:221] Checking for updates...
	I1217 00:23:46.787934    9844 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:23:46.789638    9844 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:23:46.792180    9844 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:23:46.794438    9844 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:23:46.797216    9844 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:23:46.799986    9844 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:23:46.913802    9844 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:23:46.917482    9844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:23:47.149399    9844 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-17 00:23:47.131599516 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:23:47.155157    9844 out.go:179] * Using the docker driver based on user configuration
	I1217 00:23:47.157729    9844 start.go:309] selected driver: docker
	I1217 00:23:47.157729    9844 start.go:927] validating driver "docker" against <nil>
	I1217 00:23:47.157729    9844 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:23:47.256648    9844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:23:47.483027    9844 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-17 00:23:47.462721307 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:23:47.483027    9844 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:23:47.483689    9844 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:23:47.486540    9844 out.go:179] * Using Docker Desktop driver with root privileges
	I1217 00:23:47.488518    9844 cni.go:84] Creating CNI manager for ""
	I1217 00:23:47.488603    9844 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:23:47.488637    9844 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	W1217 00:23:47.488810    9844 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:56612 to docker env.
	W1217 00:23:47.488933    9844 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:56612 to docker env.
	I1217 00:23:47.489009    9844 start.go:353] cluster config:
	{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:23:47.491594    9844 out.go:179] * Starting "functional-409700" primary control-plane node in "functional-409700" cluster
	I1217 00:23:47.495496    9844 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 00:23:47.497610    9844 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:23:47.500010    9844 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:23:47.500010    9844 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:23:47.500010    9844 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 00:23:47.500010    9844 cache.go:65] Caching tarball of preloaded images
	I1217 00:23:47.501031    9844 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 00:23:47.501031    9844 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 00:23:47.501031    9844 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\config.json ...
	I1217 00:23:47.501031    9844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\config.json: {Name:mk5c5b3ba594212bf692a250a66f1ade24713b43 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:23:47.577658    9844 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:23:47.577658    9844 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:23:47.577658    9844 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:23:47.577658    9844 start.go:360] acquireMachinesLock for functional-409700: {Name:mk3729943c20c012b6c7db136193ce43a4a81cc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:23:47.578616    9844 start.go:364] duration metric: took 876.4µs to acquireMachinesLock for "functional-409700"
	I1217 00:23:47.578724    9844 start.go:93] Provisioning new machine with config: &{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 00:23:47.578911    9844 start.go:125] createHost starting for "" (driver="docker")
	I1217 00:23:47.582292    9844 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	W1217 00:23:47.582292    9844 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:56612 to docker env.
	W1217 00:23:47.582292    9844 out.go:285] ! Local proxy ignored: not passing HTTP_PROXY=localhost:56612 to docker env.
	I1217 00:23:47.582292    9844 start.go:159] libmachine.API.Create for "functional-409700" (driver="docker")
	I1217 00:23:47.582807    9844 client.go:173] LocalClient.Create starting
	I1217 00:23:47.582936    9844 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1217 00:23:47.582936    9844 main.go:143] libmachine: Decoding PEM data...
	I1217 00:23:47.582936    9844 main.go:143] libmachine: Parsing certificate...
	I1217 00:23:47.583547    9844 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1217 00:23:47.583630    9844 main.go:143] libmachine: Decoding PEM data...
	I1217 00:23:47.583630    9844 main.go:143] libmachine: Parsing certificate...
	I1217 00:23:47.588137    9844 cli_runner.go:164] Run: docker network inspect functional-409700 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 00:23:47.639543    9844 cli_runner.go:211] docker network inspect functional-409700 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 00:23:47.642546    9844 network_create.go:284] running [docker network inspect functional-409700] to gather additional debugging logs...
	I1217 00:23:47.642546    9844 cli_runner.go:164] Run: docker network inspect functional-409700
	W1217 00:23:47.694302    9844 cli_runner.go:211] docker network inspect functional-409700 returned with exit code 1
	I1217 00:23:47.694302    9844 network_create.go:287] error running [docker network inspect functional-409700]: docker network inspect functional-409700: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network functional-409700 not found
	I1217 00:23:47.694302    9844 network_create.go:289] output of [docker network inspect functional-409700]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network functional-409700 not found
	
	** /stderr **
	I1217 00:23:47.698006    9844 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 00:23:47.764476    9844 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0018a4090}
	I1217 00:23:47.764476    9844 network_create.go:124] attempt to create docker network functional-409700 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1217 00:23:47.767465    9844 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=functional-409700 functional-409700
	I1217 00:23:47.904530    9844 network_create.go:108] docker network functional-409700 192.168.49.0/24 created
	I1217 00:23:47.904584    9844 kic.go:121] calculated static IP "192.168.49.2" for the "functional-409700" container
	I1217 00:23:47.912323    9844 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 00:23:47.969708    9844 cli_runner.go:164] Run: docker volume create functional-409700 --label name.minikube.sigs.k8s.io=functional-409700 --label created_by.minikube.sigs.k8s.io=true
	I1217 00:23:48.029499    9844 oci.go:103] Successfully created a docker volume functional-409700
	I1217 00:23:48.033053    9844 cli_runner.go:164] Run: docker run --rm --name functional-409700-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-409700 --entrypoint /usr/bin/test -v functional-409700:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 00:23:49.458126    9844 cli_runner.go:217] Completed: docker run --rm --name functional-409700-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-409700 --entrypoint /usr/bin/test -v functional-409700:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.4249479s)
	I1217 00:23:49.458223    9844 oci.go:107] Successfully prepared a docker volume functional-409700
	I1217 00:23:49.458268    9844 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:23:49.458363    9844 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 00:23:49.461758    9844 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-409700:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 00:24:04.685097    9844 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v functional-409700:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (15.2226444s)
	I1217 00:24:04.685097    9844 kic.go:203] duration metric: took 15.2266429s to extract preloaded images to volume ...
	I1217 00:24:04.689894    9844 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:24:04.926488    9844 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:54 OomKillDisable:true NGoroutines:80 SystemTime:2025-12-17 00:24:04.904437643 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:24:04.931344    9844 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 00:24:05.168658    9844 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname functional-409700 --name functional-409700 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=functional-409700 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=functional-409700 --network functional-409700 --ip 192.168.49.2 --volume functional-409700:/var --security-opt apparmor=unconfined --memory=4096mb --memory-swap=4096mb --cpus=2 -e container=docker --expose 8441 --publish=127.0.0.1::8441 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 00:24:05.852446    9844 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Running}}
	I1217 00:24:05.913723    9844 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:24:05.973726    9844 cli_runner.go:164] Run: docker exec functional-409700 stat /var/lib/dpkg/alternatives/iptables
	I1217 00:24:06.077314    9844 oci.go:144] the created container "functional-409700" has a running status.
	I1217 00:24:06.077314    9844 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa...
	I1217 00:24:06.195608    9844 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 00:24:06.272529    9844 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:24:06.333536    9844 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 00:24:06.333536    9844 kic_runner.go:114] Args: [docker exec --privileged functional-409700 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 00:24:06.493753    9844 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa...
	I1217 00:24:08.590309    9844 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:24:08.647076    9844 machine.go:94] provisionDockerMachine start ...
	I1217 00:24:08.650813    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:24:08.702605    9844 main.go:143] libmachine: Using SSH client type: native
	I1217 00:24:08.716703    9844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:24:08.716703    9844 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:24:08.896051    9844 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:24:08.896051    9844 ubuntu.go:182] provisioning hostname "functional-409700"
	I1217 00:24:08.899066    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:24:08.957262    9844 main.go:143] libmachine: Using SSH client type: native
	I1217 00:24:08.957330    9844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:24:08.957330    9844 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-409700 && echo "functional-409700" | sudo tee /etc/hostname
	I1217 00:24:09.143166    9844 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:24:09.146980    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:24:09.206537    9844 main.go:143] libmachine: Using SSH client type: native
	I1217 00:24:09.206537    9844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:24:09.206537    9844 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-409700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-409700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-409700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:24:09.378471    9844 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:24:09.378516    9844 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 00:24:09.378516    9844 ubuntu.go:190] setting up certificates
	I1217 00:24:09.378516    9844 provision.go:84] configureAuth start
	I1217 00:24:09.382296    9844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:24:09.436245    9844 provision.go:143] copyHostCerts
	I1217 00:24:09.436245    9844 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 00:24:09.436245    9844 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 00:24:09.436245    9844 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 00:24:09.437774    9844 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 00:24:09.437774    9844 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 00:24:09.437774    9844 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 00:24:09.439094    9844 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 00:24:09.439094    9844 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 00:24:09.439387    9844 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 00:24:09.439832    9844 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-409700 san=[127.0.0.1 192.168.49.2 functional-409700 localhost minikube]
	I1217 00:24:09.475981    9844 provision.go:177] copyRemoteCerts
	I1217 00:24:09.479981    9844 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:24:09.483765    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:24:09.540183    9844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:24:09.661076    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:24:09.688793    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:24:09.717687    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:24:09.742867    9844 provision.go:87] duration metric: took 364.3482ms to configureAuth
	I1217 00:24:09.742867    9844 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:24:09.743549    9844 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:24:09.747733    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:24:09.806162    9844 main.go:143] libmachine: Using SSH client type: native
	I1217 00:24:09.806745    9844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:24:09.806745    9844 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 00:24:09.977954    9844 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 00:24:09.977998    9844 ubuntu.go:71] root file system type: overlay
	I1217 00:24:09.978025    9844 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 00:24:09.982114    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:24:10.037640    9844 main.go:143] libmachine: Using SSH client type: native
	I1217 00:24:10.037850    9844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:24:10.037850    9844 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 00:24:10.218713    9844 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 00:24:10.224064    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:24:10.282863    9844 main.go:143] libmachine: Using SSH client type: native
	I1217 00:24:10.283330    9844 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:24:10.283355    9844 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 00:24:11.709294    9844 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-17 00:24:10.215250995 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1217 00:24:11.709294    9844 machine.go:97] duration metric: took 3.0620691s to provisionDockerMachine
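
provisionDockerMachine renders the docker.service unit shown above on the host and only swaps it in when the remote `sudo diff -u old new || { mv ...; systemctl daemon-reload; systemctl restart docker; }` reports a change, which is why the diff appears verbatim in the log. A rough sketch of the render step with text/template (the template text is abbreviated and the struct fields are illustrative, not minikube's):

    package main

    import (
    	"log"
    	"os"
    	"text/template"
    )

    // unitTmpl mirrors the shape of the generated unit: the empty
    // ExecStart= clears the command inherited from the base unit
    // before the full dockerd command line is set.
    const unitTmpl = `[Service]
    Type=notify
    Restart=always
    ExecStart=
    ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 --tlsverify \
    	--tlscacert {{.CACert}} --tlscert {{.ServerCert}} --tlskey {{.ServerKey}}
    `

    type unitData struct {
    	CACert, ServerCert, ServerKey string
    }

    func main() {
    	t := template.Must(template.New("docker.service").Parse(unitTmpl))
    	err := t.Execute(os.Stdout, unitData{
    		CACert:     "/etc/docker/ca.pem",
    		ServerCert: "/etc/docker/server.pem",
    		ServerKey:  "/etc/docker/server-key.pem",
    	})
    	if err != nil {
    		log.Fatal(err)
    	}
    }

The diff-or-replace idiom keeps the update idempotent: an unchanged unit produces an empty diff and the daemon is never restarted.
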
	I1217 00:24:11.709294    9844 client.go:176] duration metric: took 24.1262969s to LocalClient.Create
	I1217 00:24:11.709355    9844 start.go:167] duration metric: took 24.1268721s to libmachine.API.Create "functional-409700"
	I1217 00:24:11.709355    9844 start.go:293] postStartSetup for "functional-409700" (driver="docker")
	I1217 00:24:11.709397    9844 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:24:11.713396    9844 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:24:11.716373    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:24:11.768794    9844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:24:11.898420    9844 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:24:11.906925    9844 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:24:11.906925    9844 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:24:11.907002    9844 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 00:24:11.907246    9844 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 00:24:11.907246    9844 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 00:24:11.907945    9844 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts -> hosts in /etc/test/nested/copy/4168
	I1217 00:24:11.912171    9844 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4168
	I1217 00:24:11.925669    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 00:24:11.957633    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts --> /etc/test/nested/copy/4168/hosts (40 bytes)
	I1217 00:24:11.987126    9844 start.go:296] duration metric: took 277.7271ms for postStartSetup
	I1217 00:24:11.993197    9844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:24:12.048195    9844 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\config.json ...
	I1217 00:24:12.054610    9844 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:24:12.057315    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:24:12.112900    9844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:24:12.233919    9844 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:24:12.244846    9844 start.go:128] duration metric: took 24.6657395s to createHost
	I1217 00:24:12.244846    9844 start.go:83] releasing machines lock for "functional-409700", held for 24.6660127s
	I1217 00:24:12.249479    9844 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:24:12.307018    9844 out.go:179] * Found network options:
	I1217 00:24:12.310453    9844 out.go:179]   - HTTP_PROXY=localhost:56612
	W1217 00:24:12.312492    9844 out.go:285] ! You appear to be using a proxy, but your NO_PROXY environment does not include the minikube IP (192.168.49.2).
	I1217 00:24:12.315143    9844 out.go:179] * Please see https://minikube.sigs.k8s.io/docs/handbook/vpn_and_proxy/ for more details
	I1217 00:24:12.318686    9844 out.go:179]   - HTTP_PROXY=localhost:56612
	I1217 00:24:12.320516    9844 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 00:24:12.324516    9844 ssh_runner.go:195] Run: cat /version.json
	I1217 00:24:12.324516    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:24:12.327532    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:24:12.381710    9844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:24:12.385633    9844 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	W1217 00:24:12.505196    9844 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 00:24:12.509937    9844 ssh_runner.go:195] Run: systemctl --version
	I1217 00:24:12.524921    9844 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:24:12.533253    9844 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:24:12.538113    9844 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:24:12.594249    9844 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
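
The find/-exec invocation above sidelines any bridge or podman CNI configs by renaming them with a .mk_disabled suffix, so only the CNI minikube installs later is active. The same effect in a few lines of Go (patterns hardcoded for illustration):

    package main

    import (
    	"fmt"
    	"log"
    	"os"
    	"path/filepath"
    )

    func main() {
    	for _, pat := range []string{"/etc/cni/net.d/*bridge*", "/etc/cni/net.d/*podman*"} {
    		matches, err := filepath.Glob(pat)
    		if err != nil {
    			log.Fatal(err)
    		}
    		for _, m := range matches {
    			if filepath.Ext(m) == ".mk_disabled" {
    				continue // already disabled on a previous run
    			}
    			if err := os.Rename(m, m+".mk_disabled"); err != nil {
    				log.Fatal(err)
    			}
    			fmt.Println("disabled", m)
    		}
    	}
    }
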
	I1217 00:24:12.594249    9844 start.go:496] detecting cgroup driver to use...
	I1217 00:24:12.594249    9844 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:24:12.594249    9844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:24:12.621799    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:24:12.640104    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:24:12.656003    9844 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:24:12.661151    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1217 00:24:12.680746    9844 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 00:24:12.680746    9844 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 00:24:12.685583    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:24:12.706167    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:24:12.724452    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:24:12.746659    9844 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:24:12.766714    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:24:12.789229    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:24:12.812738    9844 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 00:24:12.834134    9844 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:24:12.852171    9844 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:24:12.869479    9844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:24:13.013493    9844 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 00:24:13.173473    9844 start.go:496] detecting cgroup driver to use...
	I1217 00:24:13.173473    9844 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:24:13.178209    9844 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 00:24:13.201824    9844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:24:13.225522    9844 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:24:13.289619    9844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:24:13.312222    9844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:24:13.331423    9844 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:24:13.359799    9844 ssh_runner.go:195] Run: which cri-dockerd
	I1217 00:24:13.371182    9844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 00:24:13.384082    9844 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 00:24:13.408094    9844 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 00:24:13.541791    9844 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 00:24:13.697803    9844 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 00:24:13.697803    9844 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
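
The 130-byte daemon.json pushed here is not echoed in the log. Given that docker.go:575 reports forcing the "cgroupfs" cgroup driver, a plausible shape is dockerd's exec-opts override; the exact contents below are an assumption, not taken from the log:

    package main

    import (
    	"log"
    	"os"
    )

    func main() {
    	// Assumed payload: only the cgroup-driver override is certain from
    	// the log line above; any other keys minikube sets are omitted.
    	daemonJSON := []byte(`{"exec-opts": ["native.cgroupdriver=cgroupfs"]}` + "\n")
    	if err := os.WriteFile("/etc/docker/daemon.json", daemonJSON, 0o644); err != nil {
    		log.Fatal(err)
    	}
    }
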
	I1217 00:24:13.722897    9844 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 00:24:13.746155    9844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:24:13.881555    9844 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 00:24:14.775761    9844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:24:14.798543    9844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 00:24:14.824598    9844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:24:14.850587    9844 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 00:24:14.995944    9844 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 00:24:15.132286    9844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:24:15.262368    9844 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 00:24:15.288354    9844 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 00:24:15.310745    9844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:24:15.441933    9844 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 00:24:15.554282    9844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:24:15.573398    9844 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 00:24:15.578429    9844 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 00:24:15.584473    9844 start.go:564] Will wait 60s for crictl version
	I1217 00:24:15.589114    9844 ssh_runner.go:195] Run: which crictl
	I1217 00:24:15.601099    9844 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:24:15.645307    9844 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 00:24:15.649173    9844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:24:15.688505    9844 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:24:15.734729    9844 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 00:24:15.738385    9844 cli_runner.go:164] Run: docker exec -t functional-409700 dig +short host.docker.internal
	I1217 00:24:15.875268    9844 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 00:24:15.880039    9844 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 00:24:15.887164    9844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
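
The bash one-liner above drops any stale host.minikube.internal line from /etc/hosts and appends the freshly dug address. Equivalent logic in Go, stdlib only (a sketch, assuming the same address as the log):

    package main

    import (
    	"log"
    	"os"
    	"strings"
    )

    func main() {
    	const entry = "192.168.65.254\thost.minikube.internal"
    	data, err := os.ReadFile("/etc/hosts")
    	if err != nil {
    		log.Fatal(err)
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		// Drop any previous mapping for the same name.
    		if strings.HasSuffix(line, "\thost.minikube.internal") {
    			continue
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, entry)
    	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		log.Fatal(err)
    	}
    }

The same filter-and-append pattern is reused later for control-plane.minikube.internal.
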
	I1217 00:24:15.908060    9844 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:24:15.964725    9844 kubeadm.go:884] updating cluster {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:24:15.964725    9844 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:24:15.968846    9844 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:24:15.998917    9844 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 00:24:15.998917    9844 docker.go:621] Images already preloaded, skipping extraction
	I1217 00:24:16.004000    9844 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:24:16.035479    9844 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 00:24:16.035479    9844 cache_images.go:86] Images are preloaded, skipping loading
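
cache_images.go decides to skip loading because every image kubeadm will need already appears in the `docker images` listing above. The check reduces to set containment; a sketch with a truncated required-image list for illustration:

    package main

    import "fmt"

    func main() {
    	// Abbreviated: the real list covers all control-plane images.
    	required := []string{
    		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
    		"registry.k8s.io/etcd:3.6.5-0",
    		"registry.k8s.io/pause:3.10.1",
    	}
    	// What `docker images --format {{.Repository}}:{{.Tag}}` reported.
    	listed := map[string]bool{
    		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0": true,
    		"registry.k8s.io/kube-scheduler:v1.35.0-beta.0": true,
    		"registry.k8s.io/etcd:3.6.5-0":                  true,
    		"registry.k8s.io/pause:3.10.1":                  true,
    	}
    	preloaded := true
    	for _, img := range required {
    		if !listed[img] {
    			preloaded = false
    			fmt.Println("missing:", img)
    		}
    	}
    	fmt.Println("images preloaded:", preloaded)
    }
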
	I1217 00:24:16.035479    9844 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1217 00:24:16.035479    9844 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-409700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:24:16.038967    9844 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 00:24:16.113755    9844 cni.go:84] Creating CNI manager for ""
	I1217 00:24:16.113755    9844 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:24:16.113755    9844 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:24:16.113755    9844 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-409700 NodeName:functional-409700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:24:16.114282    9844 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-409700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:24:16.118464    9844 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:24:16.131001    9844 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:24:16.134762    9844 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:24:16.148897    9844 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 00:24:16.171419    9844 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:24:16.192693    9844 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
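
The 2225-byte kubeadm.yaml shipped above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). One way to sanity-check such a stream before pushing it, using gopkg.in/yaml.v3's streaming decoder (a sketch, not minikube's code):

    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"

    	"gopkg.in/yaml.v3"
    )

    func main() {
    	f, err := os.Open("kubeadm.yaml")
    	if err != nil {
    		log.Fatal(err)
    	}
    	defer f.Close()
    	dec := yaml.NewDecoder(f)
    	for {
    		var doc map[string]interface{}
    		if err := dec.Decode(&doc); err == io.EOF {
    			break
    		} else if err != nil {
    			log.Fatal(err) // malformed document in the stream
    		}
    		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
    	}
    }
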
	I1217 00:24:16.217622    9844 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:24:16.225421    9844 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 00:24:16.245987    9844 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:24:16.383602    9844 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:24:16.404960    9844 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700 for IP: 192.168.49.2
	I1217 00:24:16.404960    9844 certs.go:195] generating shared ca certs ...
	I1217 00:24:16.405012    9844 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:24:16.405618    9844 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 00:24:16.405874    9844 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 00:24:16.406036    9844 certs.go:257] generating profile certs ...
	I1217 00:24:16.406389    9844 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\client.key
	I1217 00:24:16.406389    9844 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\client.crt with IP's: []
	I1217 00:24:16.542685    9844 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\client.crt ...
	I1217 00:24:16.542685    9844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\client.crt: {Name:mkf1fef4357c6d7923877090bd408b05141344ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:24:16.543697    9844 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\client.key ...
	I1217 00:24:16.543697    9844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\client.key: {Name:mk66e5b2fd47c45fac2030862847dbb223431176 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:24:16.544692    9844 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key.dc66fb1b
	I1217 00:24:16.544692    9844 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt.dc66fb1b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1217 00:24:16.711533    9844 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt.dc66fb1b ...
	I1217 00:24:16.711533    9844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt.dc66fb1b: {Name:mkf17061e96e0fe2306af8ad88c131406a243550 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:24:16.712533    9844 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key.dc66fb1b ...
	I1217 00:24:16.712533    9844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key.dc66fb1b: {Name:mk5956281edbfe16838f2de2fe166af0d3fd53d9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:24:16.713538    9844 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt.dc66fb1b -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt
	I1217 00:24:16.727534    9844 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key.dc66fb1b -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key
	I1217 00:24:16.728539    9844 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key
	I1217 00:24:16.728539    9844 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt with IP's: []
	I1217 00:24:16.806088    9844 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt ...
	I1217 00:24:16.806088    9844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt: {Name:mk2ffa70dfc888c9001796afb833ccaece1fbfe3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:24:16.807089    9844 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key ...
	I1217 00:24:16.807089    9844 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key: {Name:mk621f64ed966558e8ae29639690ce013af318f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:24:16.821086    9844 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 00:24:16.821086    9844 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 00:24:16.822091    9844 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 00:24:16.822091    9844 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 00:24:16.822091    9844 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 00:24:16.822091    9844 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 00:24:16.822091    9844 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 00:24:16.823092    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:24:16.854616    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:24:16.880951    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:24:16.909818    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:24:16.940834    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:24:16.965917    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:24:16.994075    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:24:17.023698    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:24:17.050301    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:24:17.078207    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 00:24:17.103990    9844 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 00:24:17.132768    9844 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
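
The repeated `scp memory -->` lines mean the payload never exists as a file on the Windows side: bytes built in-process are streamed straight to the remote path over the SSH session. With a stock ssh client the same trick looks roughly like this (host, port, and payload are illustrative):

    package main

    import (
    	"bytes"
    	"log"
    	"os/exec"
    )

    func main() {
    	kubeconfig := []byte("apiVersion: v1\nkind: Config\n") // stand-in payload
    	// Stream stdin into a remote sudo tee: the moral equivalent of
    	// minikube's in-memory scp.
    	cmd := exec.Command("ssh", "-p", "56623", "docker@127.0.0.1",
    		"sudo tee /var/lib/minikube/kubeconfig >/dev/null")
    	cmd.Stdin = bytes.NewReader(kubeconfig)
    	if out, err := cmd.CombinedOutput(); err != nil {
    		log.Fatalf("%v: %s", err, out)
    	}
    }
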
	I1217 00:24:17.160276    9844 ssh_runner.go:195] Run: openssl version
	I1217 00:24:17.173846    9844 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:24:17.194343    9844 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:24:17.212532    9844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:24:17.221817    9844 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:24:17.225884    9844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:24:17.272336    9844 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:24:17.290621    9844 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 00:24:17.309775    9844 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 00:24:17.327067    9844 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 00:24:17.343571    9844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 00:24:17.352868    9844 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 00:24:17.357821    9844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 00:24:17.407276    9844 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:24:17.424387    9844 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
	I1217 00:24:17.443573    9844 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 00:24:17.463086    9844 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 00:24:17.482713    9844 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 00:24:17.490636    9844 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 00:24:17.494879    9844 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 00:24:17.545030    9844 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:24:17.563092    9844 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
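
Each CA placed under /usr/share/ca-certificates is then linked into /etc/ssl/certs under its OpenSSL subject-hash name (b5213941.0, 51391683.0, 3ec20f2e.0 above), which is how OpenSSL locates trust anchors at verification time. The hash comes from the same command the log runs; a sketch of the whole step:

    package main

    import (
    	"log"
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	cert := "/usr/share/ca-certificates/minikubeCA.pem"
    	// Same invocation as the log: prints the subject hash, e.g. b5213941.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
    	if err != nil {
    		log.Fatal(err)
    	}
    	hash := strings.TrimSpace(string(out))
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // emulate ln -fs: replace any stale link
    	if err := os.Symlink(cert, link); err != nil {
    		log.Fatal(err)
    	}
    }
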
	I1217 00:24:17.579535    9844 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:24:17.587726    9844 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 00:24:17.587726    9844 kubeadm.go:401] StartCluster: {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:24:17.591280    9844 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 00:24:17.622591    9844 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:24:17.641848    9844 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:24:17.656744    9844 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:24:17.662480    9844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:24:17.676156    9844 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:24:17.676156    9844 kubeadm.go:158] found existing configuration files:
	
	I1217 00:24:17.682532    9844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:24:17.697983    9844 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:24:17.702319    9844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:24:17.718352    9844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:24:17.731965    9844 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:24:17.736572    9844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:24:17.754653    9844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:24:17.767790    9844 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:24:17.772610    9844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:24:17.790900    9844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:24:17.804714    9844 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:24:17.809728    9844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:24:17.828618    9844 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:24:17.939995    9844 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 00:24:18.029042    9844 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:24:18.149275    9844 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:28:20.004341    9844 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 00:28:20.004397    9844 kubeadm.go:319] 
	I1217 00:28:20.004554    9844 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 00:28:20.010318    9844 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:28:20.010318    9844 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:28:20.010318    9844 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:28:20.010318    9844 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 00:28:20.010862    9844 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 00:28:20.011007    9844 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 00:28:20.011111    9844 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 00:28:20.011216    9844 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 00:28:20.011323    9844 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 00:28:20.011384    9844 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 00:28:20.011384    9844 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 00:28:20.011384    9844 kubeadm.go:319] CONFIG_INET: enabled
	I1217 00:28:20.011384    9844 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 00:28:20.011384    9844 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 00:28:20.011384    9844 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 00:28:20.011927    9844 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 00:28:20.012078    9844 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 00:28:20.012078    9844 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 00:28:20.012237    9844 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 00:28:20.012385    9844 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 00:28:20.012475    9844 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 00:28:20.012564    9844 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 00:28:20.012697    9844 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 00:28:20.012827    9844 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 00:28:20.012962    9844 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 00:28:20.013005    9844 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 00:28:20.013138    9844 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 00:28:20.013269    9844 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 00:28:20.013379    9844 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 00:28:20.013465    9844 kubeadm.go:319] OS: Linux
	I1217 00:28:20.013552    9844 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:28:20.013638    9844 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:28:20.013728    9844 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:28:20.013817    9844 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:28:20.013960    9844 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:28:20.014021    9844 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:28:20.014188    9844 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:28:20.014280    9844 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:28:20.014378    9844 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:28:20.014563    9844 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:28:20.014837    9844 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:28:20.015024    9844 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:28:20.015148    9844 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:28:20.018617    9844 out.go:252]   - Generating certificates and keys ...
	I1217 00:28:20.018617    9844 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:28:20.018617    9844 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:28:20.019620    9844 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 00:28:20.019620    9844 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 00:28:20.019620    9844 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 00:28:20.019620    9844 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 00:28:20.019620    9844 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 00:28:20.019620    9844 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [functional-409700 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 00:28:20.019620    9844 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 00:28:20.019620    9844 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [functional-409700 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1217 00:28:20.020625    9844 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 00:28:20.020625    9844 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 00:28:20.020625    9844 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 00:28:20.020625    9844 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:28:20.020625    9844 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:28:20.020625    9844 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:28:20.020625    9844 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:28:20.020625    9844 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:28:20.020625    9844 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:28:20.020625    9844 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:28:20.021628    9844 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:28:20.023743    9844 out.go:252]   - Booting up control plane ...
	I1217 00:28:20.023743    9844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:28:20.023743    9844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:28:20.023743    9844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:28:20.023743    9844 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:28:20.023743    9844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:28:20.024746    9844 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:28:20.024746    9844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:28:20.024746    9844 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:28:20.024746    9844 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:28:20.024746    9844 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:28:20.024746    9844 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001424837s
	I1217 00:28:20.024746    9844 kubeadm.go:319] 
	I1217 00:28:20.024746    9844 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 00:28:20.024746    9844 kubeadm.go:319] 	- The kubelet is not running
	I1217 00:28:20.025765    9844 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 00:28:20.025765    9844 kubeadm.go:319] 
	I1217 00:28:20.025765    9844 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 00:28:20.025765    9844 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 00:28:20.025765    9844 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 00:28:20.025765    9844 kubeadm.go:319] 
	W1217 00:28:20.025765    9844 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [functional-409700 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [functional-409700 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001424837s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
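[editor's note] The check that fails above is kubeadm polling the kubelet's /healthz endpoint on 127.0.0.1:10248 inside the node. A minimal way to reproduce that probe by hand, assuming the profile name (functional-409700) taken from this run:

    # Probe the kubelet health endpoint from inside the minikube node
    minikube ssh -p functional-409700 "curl -sS http://127.0.0.1:10248/healthz"
    # "ok" means the kubelet is serving; "connection refused" matches the
    # second failure below, where the kubelet never came up at all
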
	I1217 00:28:20.031190    9844 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 00:28:20.491073    9844 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:28:20.509504    9844 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:28:20.514420    9844 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:28:20.526860    9844 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:28:20.526860    9844 kubeadm.go:158] found existing configuration files:
	
	I1217 00:28:20.531605    9844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:28:20.544678    9844 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:28:20.548953    9844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:28:20.566177    9844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:28:20.581621    9844 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:28:20.585484    9844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:28:20.613243    9844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:28:20.627848    9844 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:28:20.632110    9844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:28:20.650359    9844 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:28:20.664651    9844 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:28:20.669348    9844 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:28:20.687403    9844 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:28:20.821513    9844 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 00:28:20.904288    9844 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:28:21.003782    9844 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:32:21.726771    9844 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 00:32:21.727209    9844 kubeadm.go:319] 
	I1217 00:32:21.728712    9844 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 00:32:21.737650    9844 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:32:21.737650    9844 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:32:21.737650    9844 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:32:21.737650    9844 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 00:32:21.738677    9844 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 00:32:21.738743    9844 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 00:32:21.738828    9844 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 00:32:21.738859    9844 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 00:32:21.738907    9844 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 00:32:21.738907    9844 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 00:32:21.738907    9844 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 00:32:21.738907    9844 kubeadm.go:319] CONFIG_INET: enabled
	I1217 00:32:21.738907    9844 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 00:32:21.738907    9844 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 00:32:21.739462    9844 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 00:32:21.739633    9844 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 00:32:21.739727    9844 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 00:32:21.739853    9844 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 00:32:21.739978    9844 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 00:32:21.740102    9844 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 00:32:21.740225    9844 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 00:32:21.740348    9844 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 00:32:21.740444    9844 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 00:32:21.740590    9844 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 00:32:21.740711    9844 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 00:32:21.740885    9844 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 00:32:21.741002    9844 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 00:32:21.741119    9844 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 00:32:21.741215    9844 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 00:32:21.741331    9844 kubeadm.go:319] OS: Linux
	I1217 00:32:21.741530    9844 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:32:21.741530    9844 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:32:21.741530    9844 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:32:21.741530    9844 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:32:21.741530    9844 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:32:21.741530    9844 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:32:21.741530    9844 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:32:21.742084    9844 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:32:21.742170    9844 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:32:21.742342    9844 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:32:21.742594    9844 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:32:21.742808    9844 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:32:21.742932    9844 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:32:21.745332    9844 out.go:252]   - Generating certificates and keys ...
	I1217 00:32:21.746256    9844 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:32:21.746256    9844 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:32:21.746256    9844 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:32:21.746256    9844 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:32:21.746256    9844 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:32:21.746256    9844 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:32:21.746256    9844 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:32:21.746256    9844 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:32:21.746256    9844 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:32:21.746256    9844 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:32:21.747265    9844 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:32:21.747265    9844 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:32:21.747265    9844 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:32:21.747265    9844 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:32:21.747265    9844 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:32:21.747265    9844 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:32:21.747814    9844 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:32:21.747814    9844 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:32:21.747814    9844 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:32:21.749872    9844 out.go:252]   - Booting up control plane ...
	I1217 00:32:21.750845    9844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:32:21.751006    9844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:32:21.751006    9844 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:32:21.751006    9844 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:32:21.751006    9844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:32:21.751006    9844 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:32:21.751843    9844 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:32:21.751843    9844 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:32:21.751843    9844 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:32:21.751843    9844 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:32:21.751843    9844 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000178587s
	I1217 00:32:21.751843    9844 kubeadm.go:319] 
	I1217 00:32:21.751843    9844 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 00:32:21.751843    9844 kubeadm.go:319] 	- The kubelet is not running
	I1217 00:32:21.751843    9844 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 00:32:21.751843    9844 kubeadm.go:319] 
	I1217 00:32:21.752846    9844 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 00:32:21.752846    9844 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 00:32:21.752846    9844 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 00:32:21.752846    9844 kubeadm.go:319] 
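[editor's note] The two commands kubeadm suggests must run inside the node, not on the Windows host. A hedged equivalent via the minikube CLI (profile name taken from this log):

    minikube ssh -p functional-409700 "systemctl status kubelet --no-pager"
    minikube ssh -p functional-409700 "sudo journalctl -xeu kubelet --no-pager | tail -n 50"
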
	I1217 00:32:21.752846    9844 kubeadm.go:403] duration metric: took 8m4.1610114s to StartCluster
	I1217 00:32:21.752846    9844 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:32:21.756844    9844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:32:22.097888    9844 cri.go:89] found id: ""
	I1217 00:32:22.097929    9844 logs.go:282] 0 containers: []
	W1217 00:32:22.097929    9844 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:32:22.097954    9844 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:32:22.102906    9844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:32:22.144699    9844 cri.go:89] found id: ""
	I1217 00:32:22.144699    9844 logs.go:282] 0 containers: []
	W1217 00:32:22.144699    9844 logs.go:284] No container was found matching "etcd"
	I1217 00:32:22.144699    9844 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:32:22.151723    9844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:32:22.192215    9844 cri.go:89] found id: ""
	I1217 00:32:22.192215    9844 logs.go:282] 0 containers: []
	W1217 00:32:22.192215    9844 logs.go:284] No container was found matching "coredns"
	I1217 00:32:22.192215    9844 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:32:22.196868    9844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:32:22.237994    9844 cri.go:89] found id: ""
	I1217 00:32:22.237994    9844 logs.go:282] 0 containers: []
	W1217 00:32:22.237994    9844 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:32:22.237994    9844 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:32:22.242021    9844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:32:22.284846    9844 cri.go:89] found id: ""
	I1217 00:32:22.284846    9844 logs.go:282] 0 containers: []
	W1217 00:32:22.284846    9844 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:32:22.284846    9844 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:32:22.289651    9844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:32:22.335573    9844 cri.go:89] found id: ""
	I1217 00:32:22.335573    9844 logs.go:282] 0 containers: []
	W1217 00:32:22.335573    9844 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:32:22.335573    9844 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:32:22.342691    9844 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:32:22.388311    9844 cri.go:89] found id: ""
	I1217 00:32:22.388311    9844 logs.go:282] 0 containers: []
	W1217 00:32:22.388311    9844 logs.go:284] No container was found matching "kindnet"
	I1217 00:32:22.388311    9844 logs.go:123] Gathering logs for container status ...
	I1217 00:32:22.388311    9844 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:32:22.439925    9844 logs.go:123] Gathering logs for kubelet ...
	I1217 00:32:22.439925    9844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:32:22.520422    9844 logs.go:123] Gathering logs for dmesg ...
	I1217 00:32:22.520422    9844 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:32:22.552009    9844 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:32:22.552009    9844 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:32:22.808955    9844 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:32:22.799995    9840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:32:22.801081    9840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:32:22.802235    9840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:32:22.803476    9840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:32:22.804935    9840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:32:22.799995    9840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:32:22.801081    9840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:32:22.802235    9840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:32:22.803476    9840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:32:22.804935    9840 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
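[editor's note] The describe-nodes failure is the same apiserver-down symptom, just observed through kubectl. An equivalent host-side check, assuming minikube created a kubectl context named after the profile (its default behavior):

    kubectl --context functional-409700 get nodes -o wide
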
	I1217 00:32:22.808955    9844 logs.go:123] Gathering logs for Docker ...
	I1217 00:32:22.808955    9844 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 00:32:22.838549    9844 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000178587s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 00:32:22.838549    9844 out.go:285] * 
	W1217 00:32:22.838549    9844 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000178587s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 00:32:22.838549    9844 out.go:285] * 
	W1217 00:32:22.840789    9844 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
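[editor's note] As the box suggests, the full log bundle for a GitHub issue can be captured with (profile flag added here to match this run):

    minikube logs -p functional-409700 --file=logs.txt
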
	I1217 00:32:22.845195    9844 out.go:203] 
	W1217 00:32:22.849356    9844 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000178587s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 00:32:22.849356    9844 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 00:32:22.849356    9844 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 00:32:22.855550    9844 out.go:203] 
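[editor's note] A sketch of the retry the suggestion above describes, with the kubelet cgroup driver set at start time. Profile and driver are taken from this run; whether this resolves the cgroup v1 WSL2 failure is not confirmed by this report:

    minikube start -p functional-409700 --driver=docker \
      --extra-config=kubelet.cgroup-driver=systemd
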
	
	
	==> Docker <==
	Dec 17 00:24:14 functional-409700 dockerd[1196]: time="2025-12-17T00:24:14.655374620Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 17 00:24:14 functional-409700 dockerd[1196]: time="2025-12-17T00:24:14.655469027Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 17 00:24:14 functional-409700 dockerd[1196]: time="2025-12-17T00:24:14.655479428Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 17 00:24:14 functional-409700 dockerd[1196]: time="2025-12-17T00:24:14.655485628Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 00:24:14 functional-409700 dockerd[1196]: time="2025-12-17T00:24:14.655505930Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 00:24:14 functional-409700 dockerd[1196]: time="2025-12-17T00:24:14.655528632Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 00:24:14 functional-409700 dockerd[1196]: time="2025-12-17T00:24:14.655566535Z" level=info msg="Initializing buildkit"
	Dec 17 00:24:14 functional-409700 dockerd[1196]: time="2025-12-17T00:24:14.757795918Z" level=info msg="Completed buildkit initialization"
	Dec 17 00:24:14 functional-409700 dockerd[1196]: time="2025-12-17T00:24:14.767923819Z" level=info msg="Daemon has completed initialization"
	Dec 17 00:24:14 functional-409700 dockerd[1196]: time="2025-12-17T00:24:14.768036428Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 00:24:14 functional-409700 dockerd[1196]: time="2025-12-17T00:24:14.768097032Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 00:24:14 functional-409700 dockerd[1196]: time="2025-12-17T00:24:14.768172538Z" level=info msg="API listen on [::]:2376"
	Dec 17 00:24:14 functional-409700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 00:24:15 functional-409700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:24:15 functional-409700 cri-dockerd[1488]: time="2025-12-17T00:24:15Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 00:24:15 functional-409700 cri-dockerd[1488]: time="2025-12-17T00:24:15Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 00:24:15 functional-409700 cri-dockerd[1488]: time="2025-12-17T00:24:15Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 00:24:15 functional-409700 cri-dockerd[1488]: time="2025-12-17T00:24:15Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 00:24:15 functional-409700 cri-dockerd[1488]: time="2025-12-17T00:24:15Z" level=info msg="Loaded network plugin cni"
	Dec 17 00:24:15 functional-409700 cri-dockerd[1488]: time="2025-12-17T00:24:15Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 00:24:15 functional-409700 cri-dockerd[1488]: time="2025-12-17T00:24:15Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 00:24:15 functional-409700 cri-dockerd[1488]: time="2025-12-17T00:24:15Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 00:24:15 functional-409700 cri-dockerd[1488]: time="2025-12-17T00:24:15Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 00:24:15 functional-409700 cri-dockerd[1488]: time="2025-12-17T00:24:15Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 00:24:15 functional-409700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
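	The "Setting cgroupDriver cgroupfs" line above is the detail the kubelet failures later in this log key on: the node is still on cgroup v1 with the cgroupfs driver, which kubelet v1.35 rejects by default. A sketch of confirming which hierarchy the node actually mounts (profile name assumed from this run):
	
	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy v1 layout
	out/minikube-windows-amd64.exe ssh -p functional-409700 -- stat -fc %T /sys/fs/cgroup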
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:32:24.786578   10000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:32:24.787585   10000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:32:24.793373   10000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:32:24.794358   10000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:32:24.795411   10000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
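	As a sketch of narrowing these refusals down, one could ask minikube for its own view of the apiserver and then look for a listener on the configured port 8441 (assuming iproute2's ss is available in the node image):
	
	out/minikube-windows-amd64.exe status -p functional-409700 --format={{.APIServer}}
	out/minikube-windows-amd64.exe ssh -p functional-409700 -- "sudo ss -tlnp | grep 8441 || echo nothing listening on 8441"
	
	Both should confirm what the kubelet section below already shows: the control plane never came up, so every kubectl call dies at dial.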
	
	
	==> dmesg <==
	[  +0.000955] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001003] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000957] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001294] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001301] FS:  0000000000000000 GS:  0000000000000000
	[  +6.552843] CPU: 6 PID: 45130 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000770] RIP: 0033:0x7f064b479b20
	[  +0.000404] Code: Unable to access opcode bytes at RIP 0x7f064b479af6.
	[  +0.000644] RSP: 002b:00007fff5a087420 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000778] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000808] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000781] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001199] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001203] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001103] FS:  0000000000000000 GS:  0000000000000000
	[  +0.829363] CPU: 8 PID: 45242 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000819] RIP: 0033:0x7f596ba75b20
	[  +0.000386] Code: Unable to access opcode bytes at RIP 0x7f596ba75af6.
	[  +0.001352] RSP: 002b:00007ffd6f1412e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000829] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000806] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000803] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000826] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000811] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000815] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 00:32:24 up 51 min,  0 user,  load average: 0.12, 0.42, 0.72
	Linux functional-409700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:32:21 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:32:21 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 17 00:32:21 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:32:21 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:32:22 functional-409700 kubelet[9730]: E1217 00:32:22.031733    9730 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:32:22 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:32:22 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:32:22 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 00:32:22 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:32:22 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:32:22 functional-409700 kubelet[9849]: E1217 00:32:22.779403    9849 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:32:22 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:32:22 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:32:23 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 17 00:32:23 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:32:23 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:32:23 functional-409700 kubelet[9873]: E1217 00:32:23.524683    9873 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:32:23 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:32:23 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:32:24 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 17 00:32:24 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:32:24 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:32:24 functional-409700 kubelet[9902]: E1217 00:32:24.279272    9902 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:32:24 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:32:24 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700: exit status 6 (608.2038ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 00:32:25.762018    8736 status.go:458] kubeconfig endpoint: get endpoint: "functional-409700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "functional-409700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/StartWithProxy (519.16s)
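The stale-kubeconfig warning in the stdout above names its own fix; a short sketch of applying and verifying it with the profile from this run:

# repoint the kubectl context at the current endpoint, then confirm it resolves
out/minikube-windows-amd64.exe update-context -p functional-409700
kubectl config current-context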

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (374.74s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart
I1217 00:32:25.813365    4168 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-409700 --alsologtostderr -v=8
E1217 00:33:14.099613    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:33:41.807907    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:35:33.693252    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:38:14.102602    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-409700 --alsologtostderr -v=8: exit status 80 (6m10.5038997s)

-- stdout --
	* [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "functional-409700" primary control-plane node in "functional-409700" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1217 00:32:25.884023   10364 out.go:360] Setting OutFile to fd 1372 ...
	I1217 00:32:25.926022   10364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:25.926022   10364 out.go:374] Setting ErrFile to fd 1800...
	I1217 00:32:25.926022   10364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:25.940016   10364 out.go:368] Setting JSON to false
	I1217 00:32:25.942016   10364 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3134,"bootTime":1765928411,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:32:25.942016   10364 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:32:25.946016   10364 out.go:179] * [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:32:25.948015   10364 notify.go:221] Checking for updates...
	I1217 00:32:25.950019   10364 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:25.952018   10364 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:32:25.955015   10364 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:32:25.957015   10364 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:32:25.960017   10364 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:32:25.964016   10364 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:32:25.964016   10364 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:32:26.171156   10364 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:32:26.176438   10364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:32:26.427526   10364 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 00:32:26.406486235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:32:26.434528   10364 out.go:179] * Using the docker driver based on existing profile
	I1217 00:32:26.436524   10364 start.go:309] selected driver: docker
	I1217 00:32:26.436524   10364 start.go:927] validating driver "docker" against &{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:26.436524   10364 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:32:26.442525   10364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:32:26.668518   10364 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 00:32:26.649642613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:32:26.752324   10364 cni.go:84] Creating CNI manager for ""
	I1217 00:32:26.752324   10364 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:32:26.752324   10364 start.go:353] cluster config:
	{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:26.755825   10364 out.go:179] * Starting "functional-409700" primary control-plane node in "functional-409700" cluster
	I1217 00:32:26.757701   10364 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 00:32:26.760609   10364 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:32:26.762036   10364 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:32:26.763103   10364 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 00:32:26.763103   10364 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:32:26.763103   10364 cache.go:65] Caching tarball of preloaded images
	I1217 00:32:26.763399   10364 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 00:32:26.763399   10364 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 00:32:26.763399   10364 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\config.json ...
	I1217 00:32:26.840670   10364 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:32:26.840729   10364 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:32:26.840729   10364 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:32:26.840729   10364 start.go:360] acquireMachinesLock for functional-409700: {Name:mk3729943c20c012b6c7db136193ce43a4a81cc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:32:26.840729   10364 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-409700"
	I1217 00:32:26.840729   10364 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:32:26.840729   10364 fix.go:54] fixHost starting: 
	I1217 00:32:26.848208   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:26.901821   10364 fix.go:112] recreateIfNeeded on functional-409700: state=Running err=<nil>
	W1217 00:32:26.901821   10364 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:32:26.907276   10364 out.go:252] * Updating the running docker "functional-409700" container ...
	I1217 00:32:26.907373   10364 machine.go:94] provisionDockerMachine start ...
	I1217 00:32:26.910817   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:26.967003   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:26.967068   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:26.967068   10364 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:32:27.152656   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:32:27.152656   10364 ubuntu.go:182] provisioning hostname "functional-409700"
	I1217 00:32:27.156074   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:27.214234   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:27.214712   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:27.214757   10364 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-409700 && echo "functional-409700" | sudo tee /etc/hostname
	I1217 00:32:27.407594   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:32:27.413090   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:27.490102   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:27.490703   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:27.490749   10364 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-409700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-409700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-409700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:32:27.672866   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:32:27.672866   10364 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 00:32:27.672866   10364 ubuntu.go:190] setting up certificates
	I1217 00:32:27.672866   10364 provision.go:84] configureAuth start
	I1217 00:32:27.676807   10364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:32:27.732901   10364 provision.go:143] copyHostCerts
	I1217 00:32:27.733091   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1217 00:32:27.733091   10364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 00:32:27.733091   10364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 00:32:27.733091   10364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 00:32:27.734330   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1217 00:32:27.734382   10364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 00:32:27.734382   10364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 00:32:27.734382   10364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 00:32:27.735088   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1217 00:32:27.735088   10364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 00:32:27.735088   10364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 00:32:27.735728   10364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 00:32:27.736339   10364 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-409700 san=[127.0.0.1 192.168.49.2 functional-409700 localhost minikube]
	I1217 00:32:27.847670   10364 provision.go:177] copyRemoteCerts
	I1217 00:32:27.851712   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:32:27.854410   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:27.907971   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:28.027015   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1217 00:32:28.027015   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:32:28.064351   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1217 00:32:28.064351   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:32:28.092479   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1217 00:32:28.092479   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:32:28.124650   10364 provision.go:87] duration metric: took 451.7801ms to configureAuth
	I1217 00:32:28.124650   10364 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:32:28.125238   10364 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:32:28.128674   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.184894   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:28.185614   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:28.185614   10364 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 00:32:28.351273   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 00:32:28.351273   10364 ubuntu.go:71] root file system type: overlay
	I1217 00:32:28.351273   10364 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 00:32:28.355630   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.410840   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:28.411043   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:28.411043   10364 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 00:32:28.608128   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 00:32:28.612284   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.672356   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:28.672356   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:28.672356   10364 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 00:32:28.839586   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: 
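	The drop-in comments embedded in the unit above describe the ExecStart-clearing idiom: an empty ExecStart= resets the command list inherited from the base unit before the replacement is declared. The same pattern as a standalone override, with the file name and dockerd arguments assumed purely for illustration:
	
	# sketch only: clear and replace ExecStart via a drop-in, then reload and restart
	sudo mkdir -p /etc/systemd/system/docker.service.d
	printf '[Service]\nExecStart=\nExecStart=/usr/bin/dockerd -H fd://\n' | sudo tee /etc/systemd/system/docker.service.d/10-execstart.conf
	sudo systemctl daemon-reload && sudo systemctl restart docker
	
	minikube instead rewrites the whole unit and swaps it in with the diff-guarded mv shown just above, which skips the restart when nothing changed.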
	I1217 00:32:28.839640   10364 machine.go:97] duration metric: took 1.9322227s to provisionDockerMachine
	I1217 00:32:28.839640   10364 start.go:293] postStartSetup for "functional-409700" (driver="docker")
	I1217 00:32:28.839640   10364 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:32:28.845012   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:32:28.847117   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.904187   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.040693   10364 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:32:29.050158   10364 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 00:32:29.050158   10364 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 00:32:29.050158   10364 command_runner.go:130] > VERSION_ID="12"
	I1217 00:32:29.050158   10364 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 00:32:29.050158   10364 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 00:32:29.050158   10364 command_runner.go:130] > ID=debian
	I1217 00:32:29.050158   10364 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 00:32:29.050158   10364 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 00:32:29.050158   10364 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 00:32:29.050158   10364 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:32:29.050158   10364 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:32:29.050158   10364 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 00:32:29.050158   10364 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 00:32:29.050833   10364 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 00:32:29.050833   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> /etc/ssl/certs/41682.pem
	I1217 00:32:29.051707   10364 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts -> hosts in /etc/test/nested/copy/4168
	I1217 00:32:29.051707   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts -> /etc/test/nested/copy/4168/hosts
	I1217 00:32:29.055303   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4168
	I1217 00:32:29.070738   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 00:32:29.103807   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts --> /etc/test/nested/copy/4168/hosts (40 bytes)
	I1217 00:32:29.133625   10364 start.go:296] duration metric: took 293.9818ms for postStartSetup
	I1217 00:32:29.137970   10364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:32:29.142249   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:29.194718   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.311046   10364 command_runner.go:130] > 1%
	I1217 00:32:29.316279   10364 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:32:29.324732   10364 command_runner.go:130] > 950G
	I1217 00:32:29.324732   10364 fix.go:56] duration metric: took 2.4839807s for fixHost
	I1217 00:32:29.324732   10364 start.go:83] releasing machines lock for "functional-409700", held for 2.4839807s
	I1217 00:32:29.330157   10364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:32:29.384617   10364 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 00:32:29.388675   10364 ssh_runner.go:195] Run: cat /version.json
	I1217 00:32:29.388675   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:29.392044   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:29.442282   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.464827   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.558946   10364 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1217 00:32:29.559478   10364 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 00:32:29.581467   10364 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 00:32:29.585625   10364 ssh_runner.go:195] Run: systemctl --version
	I1217 00:32:29.598125   10364 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 00:32:29.598125   10364 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 00:32:29.602648   10364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 00:32:29.614417   10364 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 00:32:29.615099   10364 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:32:29.621960   10364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:32:29.646439   10364 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:32:29.646439   10364 start.go:496] detecting cgroup driver to use...
	I1217 00:32:29.646439   10364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:32:29.646439   10364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:32:29.668226   10364 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1217 00:32:29.672516   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:32:29.695799   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:32:29.710451   10364 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:32:29.715117   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1217 00:32:29.723829   10364 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 00:32:29.723829   10364 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
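	The probe above failed for an incidental reason: curl.exe does not exist inside the Linux node, so the proxy warning is a hint rather than a diagnosis. A sketch of repeating the reachability check with the node's own curl (assuming curl is present in the kicbase image; the proxy values appear in the docker info output earlier in this log):
	
	out/minikube-windows-amd64.exe ssh -p functional-409700 -- curl -sS -m 2 https://registry.k8s.io/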
	I1217 00:32:29.737249   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:32:29.756347   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:32:29.779698   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:32:29.801679   10364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:32:29.825863   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:32:29.844752   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:32:29.865139   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
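Taken together, the sed edits above pin containerd's CRI plugin to the cgroupfs driver, the pause:3.10.1 sandbox image, the runc v2 shim, and unprivileged ports. A sketch of verifying their net effect (key names assume containerd's io.containerd.grpc.v1.cri table layout):

    # Verify the keys touched by the sed edits above (sketch)
    sudo grep -nE 'SystemdCgroup|sandbox_image|restrict_oom_score_adj|conf_dir|enable_unprivileged_ports' \
      /etc/containerd/config.toml
    # expected, approximately:
    #   SystemdCgroup = false
    #   sandbox_image = "registry.k8s.io/pause:3.10.1"
    #   restrict_oom_score_adj = false
    #   conf_dir = "/etc/cni/net.d"
    #   enable_unprivileged_ports = true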
	I1217 00:32:29.885382   10364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:32:29.900142   10364 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 00:32:29.904180   10364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:32:29.922078   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:30.133548   10364 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 00:32:30.412249   10364 start.go:496] detecting cgroup driver to use...
	I1217 00:32:30.412298   10364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:32:30.416670   10364 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 00:32:30.435945   10364 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1217 00:32:30.435945   10364 command_runner.go:130] > [Unit]
	I1217 00:32:30.435945   10364 command_runner.go:130] > Description=Docker Application Container Engine
	I1217 00:32:30.435945   10364 command_runner.go:130] > Documentation=https://docs.docker.com
	I1217 00:32:30.435945   10364 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1217 00:32:30.435945   10364 command_runner.go:130] > Wants=network-online.target containerd.service
	I1217 00:32:30.435945   10364 command_runner.go:130] > Requires=docker.socket
	I1217 00:32:30.435945   10364 command_runner.go:130] > StartLimitBurst=3
	I1217 00:32:30.435945   10364 command_runner.go:130] > StartLimitIntervalSec=60
	I1217 00:32:30.435945   10364 command_runner.go:130] > [Service]
	I1217 00:32:30.435945   10364 command_runner.go:130] > Type=notify
	I1217 00:32:30.435945   10364 command_runner.go:130] > Restart=always
	I1217 00:32:30.435945   10364 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1217 00:32:30.435945   10364 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1217 00:32:30.435945   10364 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1217 00:32:30.435945   10364 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1217 00:32:30.435945   10364 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1217 00:32:30.435945   10364 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1217 00:32:30.435945   10364 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1217 00:32:30.435945   10364 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1217 00:32:30.435945   10364 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1217 00:32:30.435945   10364 command_runner.go:130] > ExecStart=
	I1217 00:32:30.435945   10364 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1217 00:32:30.435945   10364 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1217 00:32:30.435945   10364 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1217 00:32:30.435945   10364 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1217 00:32:30.435945   10364 command_runner.go:130] > LimitNOFILE=infinity
	I1217 00:32:30.435945   10364 command_runner.go:130] > LimitNPROC=infinity
	I1217 00:32:30.435945   10364 command_runner.go:130] > LimitCORE=infinity
	I1217 00:32:30.435945   10364 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1217 00:32:30.435945   10364 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1217 00:32:30.435945   10364 command_runner.go:130] > TasksMax=infinity
	I1217 00:32:30.437404   10364 command_runner.go:130] > TimeoutStartSec=0
	I1217 00:32:30.437404   10364 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1217 00:32:30.437404   10364 command_runner.go:130] > Delegate=yes
	I1217 00:32:30.437404   10364 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1217 00:32:30.437404   10364 command_runner.go:130] > KillMode=process
	I1217 00:32:30.437404   10364 command_runner.go:130] > OOMScoreAdjust=-500
	I1217 00:32:30.437404   10364 command_runner.go:130] > [Install]
	I1217 00:32:30.437404   10364 command_runner.go:130] > WantedBy=multi-user.target
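The empty ExecStart= followed by a populated one is the standard systemd idiom for replacing, rather than appending to, a command inherited from a base unit, as the comments in the unit itself note. A minimal sketch of the same pattern in a drop-in (file path illustrative):

    sudo mkdir -p /etc/systemd/system/docker.service.d
    sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker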
	I1217 00:32:30.443833   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:32:30.468114   10364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:32:30.542786   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:32:30.567969   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:32:30.586631   10364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:32:30.606342   10364 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I1217 00:32:30.611878   10364 ssh_runner.go:195] Run: which cri-dockerd
	I1217 00:32:30.618659   10364 command_runner.go:130] > /usr/bin/cri-dockerd
	I1217 00:32:30.623279   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 00:32:30.636760   10364 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 00:32:30.661689   10364 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 00:32:30.828747   10364 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 00:32:30.988536   10364 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 00:32:30.988536   10364 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 00:32:31.016800   10364 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 00:32:31.041396   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:31.178126   10364 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 00:32:32.195651   10364 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0175164s)
	I1217 00:32:32.199801   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:32:32.224938   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 00:32:32.247199   10364 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 00:32:32.275016   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:32:32.297360   10364 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 00:32:32.448301   10364 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 00:32:32.597398   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:32.739627   10364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 00:32:32.765463   10364 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 00:32:32.790341   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:32.929296   10364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 00:32:33.067092   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:32:33.087872   10364 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 00:32:33.092277   10364 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 00:32:33.102122   10364 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1217 00:32:33.102122   10364 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 00:32:33.102122   10364 command_runner.go:130] > Device: 0,112	Inode: 1758        Links: 1
	I1217 00:32:33.102122   10364 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1217 00:32:33.102122   10364 command_runner.go:130] > Access: 2025-12-17 00:32:32.939070006 +0000
	I1217 00:32:33.102122   10364 command_runner.go:130] > Modify: 2025-12-17 00:32:32.939070006 +0000
	I1217 00:32:33.102122   10364 command_runner.go:130] > Change: 2025-12-17 00:32:32.939070006 +0000
	I1217 00:32:33.103099   10364 command_runner.go:130] >  Birth: -
	I1217 00:32:33.103099   10364 start.go:564] Will wait 60s for crictl version
	I1217 00:32:33.106627   10364 ssh_runner.go:195] Run: which crictl
	I1217 00:32:33.116038   10364 command_runner.go:130] > /usr/local/bin/crictl
	I1217 00:32:33.119921   10364 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:32:33.163697   10364 command_runner.go:130] > Version:  0.1.0
	I1217 00:32:33.163697   10364 command_runner.go:130] > RuntimeName:  docker
	I1217 00:32:33.163697   10364 command_runner.go:130] > RuntimeVersion:  29.1.3
	I1217 00:32:33.163697   10364 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 00:32:33.163697   10364 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
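Because /etc/crictl.yaml was rewritten above to point at cri-dockerd, a plain `sudo crictl version` already targets Docker; the endpoint can also be passed explicitly. Sketch:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
    # expected: RuntimeName: docker, RuntimeVersion: 29.1.3, RuntimeApiVersion: v1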
	I1217 00:32:33.167790   10364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:32:33.207644   10364 command_runner.go:130] > 29.1.3
	I1217 00:32:33.212842   10364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:32:33.256029   10364 command_runner.go:130] > 29.1.3
	I1217 00:32:33.258896   10364 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 00:32:33.262892   10364 cli_runner.go:164] Run: docker exec -t functional-409700 dig +short host.docker.internal
	I1217 00:32:33.463377   10364 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 00:32:33.467155   10364 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 00:32:33.475542   10364 command_runner.go:130] > 192.168.65.254	host.minikube.internal
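The dig/grep pair resolves the Docker Desktop host gateway and checks it is already mapped inside the node. Had the grep come back empty, a plausible equivalent of the fallback is appending the entry by hand (a sketch, using the IP dug up above):

    echo "192.168.65.254 host.minikube.internal" | sudo tee -a /etc/hosts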
	I1217 00:32:33.478907   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:33.533350   10364 kubeadm.go:884] updating cluster {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:32:33.533350   10364 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:32:33.537278   10364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1217 00:32:33.575248   10364 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:32:33.575248   10364 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 00:32:33.575248   10364 docker.go:621] Images already preloaded, skipping extraction
	I1217 00:32:33.579121   10364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:32:33.614970   10364 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 00:32:33.615044   10364 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 00:32:33.615044   10364 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1217 00:32:33.615141   10364 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:32:33.615171   10364 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
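The preload check is a set comparison: every image kubeadm will need must already appear in `docker images` output, otherwise the preload tarball is extracted. A sketch of the same comparison (expected list abridged from the log above):

    expected="registry.k8s.io/kube-apiserver:v1.35.0-beta.0
    registry.k8s.io/etcd:3.6.5-0
    registry.k8s.io/pause:3.10.1
    gcr.io/k8s-minikube/storage-provisioner:v5"
    have=$(docker images --format '{{.Repository}}:{{.Tag}}')
    for img in $expected; do
      echo "$have" | grep -qxF "$img" || echo "missing: $img"
    done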
	I1217 00:32:33.615171   10364 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:32:33.615171   10364 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1217 00:32:33.615349   10364 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-409700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
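The [Unit]/[Service] fragment above is materialized a few lines below as the 323-byte drop-in scp'd to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A sketch of plausible file contents, reconstructed from the flags in the log (not a byte-exact copy):

    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf <<'EOF'
    [Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-409700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

    [Install]
    EOF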
	I1217 00:32:33.618510   10364 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 00:32:34.052354   10364 command_runner.go:130] > cgroupfs
	I1217 00:32:34.052472   10364 cni.go:84] Creating CNI manager for ""
	I1217 00:32:34.052529   10364 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:32:34.052529   10364 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:32:34.052529   10364 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-409700 NodeName:functional-409700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:32:34.052529   10364 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-409700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:32:34.056808   10364 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:32:34.073105   10364 command_runner.go:130] > kubeadm
	I1217 00:32:34.073177   10364 command_runner.go:130] > kubectl
	I1217 00:32:34.073177   10364 command_runner.go:130] > kubelet
	I1217 00:32:34.073240   10364 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:32:34.077459   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:32:34.090893   10364 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 00:32:34.114750   10364 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:32:34.135531   10364 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
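The kubeadm.yaml.new just copied is the three-document config printed above; recent kubeadm releases can sanity-check such a file offline before any init or upgrade runs. A sketch, assuming the bundled kubeadm supports `config validate`:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new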
	I1217 00:32:34.159985   10364 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:32:34.168280   10364 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 00:32:34.172492   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:34.310890   10364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:32:34.700023   10364 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700 for IP: 192.168.49.2
	I1217 00:32:34.700115   10364 certs.go:195] generating shared ca certs ...
	I1217 00:32:34.700115   10364 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:34.700485   10364 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 00:32:34.701055   10364 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 00:32:34.701055   10364 certs.go:257] generating profile certs ...
	I1217 00:32:34.701864   10364 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\client.key
	I1217 00:32:34.702120   10364 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key.dc66fb1b
	I1217 00:32:34.702437   10364 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key
	I1217 00:32:34.702487   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 00:32:34.702646   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 00:32:34.703540   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 00:32:34.703598   10364 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 00:32:34.703598   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 00:32:34.703598   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 00:32:34.704137   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 00:32:34.704439   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 00:32:34.704439   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 00:32:34.704439   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:34.704970   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem -> /usr/share/ca-certificates/4168.pem
	I1217 00:32:34.705196   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> /usr/share/ca-certificates/41682.pem
	I1217 00:32:34.706089   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:32:34.736497   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:32:34.769712   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:32:34.802984   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:32:34.830525   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:32:34.860563   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:32:34.889179   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:32:34.920536   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:32:34.947027   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:32:34.978500   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 00:32:35.008458   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 00:32:35.040774   10364 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:32:35.063574   10364 ssh_runner.go:195] Run: openssl version
	I1217 00:32:35.083169   10364 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 00:32:35.087374   10364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.105491   10364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:32:35.130590   10364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.139034   10364 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.139034   10364 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.144343   10364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.192130   10364 command_runner.go:130] > b5213941
	I1217 00:32:35.199882   10364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:32:35.220625   10364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.238544   10364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 00:32:35.259065   10364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.266549   10364 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.266638   10364 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.271223   10364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.315698   10364 command_runner.go:130] > 51391683
	I1217 00:32:35.322687   10364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:32:35.339650   10364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.358290   10364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 00:32:35.374891   10364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.383058   10364 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.383058   10364 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.387660   10364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.431595   10364 command_runner.go:130] > 3ec20f2e
	I1217 00:32:35.436891   10364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
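Each hash/symlink triple above implements OpenSSL's CApath convention: a certificate is found in /etc/ssl/certs via a symlink named after its subject hash. A sketch tying the pieces together for the minikubeCA case (hash b5213941 from the log):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/$h.0"        # the symlink created by ln -fs above
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem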
	I1217 00:32:35.453526   10364 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:32:35.462183   10364 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:32:35.462183   10364 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 00:32:35.462183   10364 command_runner.go:130] > Device: 8,48	Inode: 15294       Links: 1
	I1217 00:32:35.462183   10364 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:32:35.462183   10364 command_runner.go:130] > Access: 2025-12-17 00:28:21.018933524 +0000
	I1217 00:32:35.462183   10364 command_runner.go:130] > Modify: 2025-12-17 00:24:18.315890848 +0000
	I1217 00:32:35.462183   10364 command_runner.go:130] > Change: 2025-12-17 00:24:18.315890848 +0000
	I1217 00:32:35.462183   10364 command_runner.go:130] >  Birth: 2025-12-17 00:24:18.315890848 +0000
	I1217 00:32:35.466206   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:32:35.509324   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.514900   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:32:35.558615   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.563444   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:32:35.608112   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.612517   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:32:35.657914   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.662797   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:32:35.707243   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.713694   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:32:35.760477   10364 command_runner.go:130] > Certificate will not expire
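Each check above uses `-checkend 86400`, which succeeds only if the certificate remains valid for at least another 86400 seconds (24 hours); the exit status, not the printed text, is what decides whether certs get regenerated. Sketch:

    if openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver.crt; then
      echo "valid for at least another day"     # openssl prints "Certificate will not expire"
    else
      echo "expires within 24h; regenerate"     # openssl prints "Certificate will expire"
    fi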
	I1217 00:32:35.761002   10364 kubeadm.go:401] StartCluster: {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:35.764353   10364 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 00:32:35.796231   10364 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:32:35.810900   10364 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 00:32:35.810946   10364 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 00:32:35.810946   10364 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 00:32:35.810996   10364 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:32:35.810996   10364 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:32:35.815318   10364 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:32:35.828811   10364 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:32:35.832840   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:35.889236   10364 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-409700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:35.889236   10364 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-409700" cluster setting kubeconfig missing "functional-409700" context setting]
	I1217 00:32:35.889236   10364 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:35.906814   10364 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:35.907042   10364 kapi.go:59] client config for functional-409700: &rest.Config{Host:"https://127.0.0.1:56622", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff734ad9080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:32:35.908414   10364 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 00:32:35.912354   10364 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:32:35.931570   10364 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1217 00:32:35.931672   10364 kubeadm.go:602] duration metric: took 120.6751ms to restartPrimaryControlPlane
	I1217 00:32:35.931672   10364 kubeadm.go:403] duration metric: took 170.6688ms to StartCluster
	I1217 00:32:35.931672   10364 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:35.931672   10364 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:35.932861   10364 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:35.933736   10364 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 00:32:35.933736   10364 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:32:35.933901   10364 addons.go:70] Setting storage-provisioner=true in profile "functional-409700"
	I1217 00:32:35.933901   10364 addons.go:239] Setting addon storage-provisioner=true in "functional-409700"
	I1217 00:32:35.933901   10364 addons.go:70] Setting default-storageclass=true in profile "functional-409700"
	I1217 00:32:35.934051   10364 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:32:35.934098   10364 host.go:66] Checking if "functional-409700" exists ...
	I1217 00:32:35.934098   10364 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-409700"
	I1217 00:32:35.936531   10364 out.go:179] * Verifying Kubernetes components...
	I1217 00:32:35.942620   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:35.942620   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:35.944620   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:36.000654   10364 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:32:36.002654   10364 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:36.002654   10364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:32:36.005647   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:36.010648   10364 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:36.011652   10364 kapi.go:59] client config for functional-409700: &rest.Config{Host:"https://127.0.0.1:56622", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff734ad9080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:32:36.012648   10364 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:32:36.012648   10364 addons.go:239] Setting addon default-storageclass=true in "functional-409700"
	I1217 00:32:36.012648   10364 host.go:66] Checking if "functional-409700" exists ...
	I1217 00:32:36.019655   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:36.056654   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:36.069645   10364 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:36.069645   10364 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:32:36.072658   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:36.098645   10364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:32:36.122646   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:36.187680   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:36.202921   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:36.260682   10364 node_ready.go:35] waiting up to 6m0s for node "functional-409700" to be "Ready" ...
	I1217 00:32:36.260849   10364 type.go:168] "Request Body" body=""
	I1217 00:32:36.261061   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:36.264195   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
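The node poll above is an ordinary authenticated GET against the forwarded apiserver port; an equivalent by-hand probe using the client certificate paths from the kapi config (a sketch, with the Windows profile paths shown POSIX-style for brevity):

    curl -s --cacert ~/.minikube/ca.crt \
         --cert ~/.minikube/profiles/functional-409700/client.crt \
         --key  ~/.minikube/profiles/functional-409700/client.key \
         https://127.0.0.1:56622/api/v1/nodes/functional-409700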
	I1217 00:32:36.265260   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:36.336693   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.340106   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.340627   10364 retry.go:31] will retry after 202.939607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.388976   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.393288   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.393853   10364 retry.go:31] will retry after 227.289762ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
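All of the apply failures in this stretch share one cause: the apiserver behind localhost:8441 is not accepting connections yet, so kubectl cannot fetch the OpenAPI schema it validates manifests against, and minikube simply retries with growing backoff. A sketch of the same wait-then-apply pattern:

    # Retry kubectl apply until the apiserver accepts connections (sketch)
    for delay in 1 2 4 8; do
      kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml && break
      sleep "$delay"
    done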
	I1217 00:32:36.548879   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:36.622050   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.626260   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.626260   10364 retry.go:31] will retry after 395.113457ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.626489   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:36.698520   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.702459   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.702459   10364 retry.go:31] will retry after 468.39049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.026805   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:37.111151   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.116224   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.116762   10364 retry.go:31] will retry after 792.119284ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.177175   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:37.249858   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.255359   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.255359   10364 retry.go:31] will retry after 596.241339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.265542   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:37.265542   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:37.267933   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
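
Interleaved with the apply retries, the with_retry.go/round_trippers.go pairs record client-go re-issuing the same node GET about once per second, attempt=1 through attempt=10. The sketch below reproduces the observable behavior with a plain net/http client that honors a Retry-After header; it is a simplified stand-in under that assumption, not client-go's internal implementation (the empty status="" responses here suggest client-go is also applying its 1s default delay to failed transports). The URL is taken from the log; the attempt cap mirrors the attempt=1..10 cycle.

// retry_after.go — sketch of re-requesting on Retry-After, as the
// with_retry.go lines above show. Simplified stand-in, not client-go.
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getWithRetryAfter re-issues a GET whenever the server answers with a
// Retry-After header, up to maxAttempts.
func getWithRetryAfter(url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" {
			return resp, nil // no throttling hint: done
		}
		resp.Body.Close()
		secs, err := strconv.Atoi(ra)
		if err != nil {
			secs = 1 // Retry-After may be an HTTP date; default to 1s
		}
		fmt.Printf("Got a Retry-After response: delay=%ds attempt=%d\n", secs, attempt)
		time.Sleep(time.Duration(secs) * time.Second)
	}
	return nil, fmt.Errorf("still throttled after %d attempts", maxAttempts)
}

func main() {
	resp, err := getWithRetryAfter("https://127.0.0.1:56622/api/v1/nodes/functional-409700", 10)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}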
	I1217 00:32:37.856198   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:37.913554   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:37.941640   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.944331   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.944331   10364 retry.go:31] will retry after 571.98292ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.986334   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.989310   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.989310   10364 retry.go:31] will retry after 625.589854ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.268385   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:38.268385   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:38.271420   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:38.521873   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:38.599872   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:38.599872   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.599872   10364 retry.go:31] will retry after 1.272749266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.621006   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:38.701213   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:38.701287   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.701287   10364 retry.go:31] will retry after 729.524766ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.272125   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:39.272125   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:39.274907   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:39.436175   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:39.531183   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:39.531183   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.531183   10364 retry.go:31] will retry after 993.07118ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.877780   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:39.947906   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:39.950459   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.950459   10364 retry.go:31] will retry after 981.929326ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:40.275982   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:40.275982   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:40.278602   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:40.529721   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:40.604194   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:40.610090   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:40.610090   10364 retry.go:31] will retry after 3.313570586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:40.937823   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:41.010101   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:41.013448   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:41.013448   10364 retry.go:31] will retry after 3.983327016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:41.279217   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:41.279217   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:41.282049   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:42.282642   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:42.282642   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:42.285895   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:43.285957   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:43.285957   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:43.289436   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:43.928516   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:44.010824   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:44.016536   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:44.016536   10364 retry.go:31] will retry after 3.387443088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:44.290770   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:44.290770   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:44.293999   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:45.002652   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:45.076704   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:45.080905   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:45.080905   10364 retry.go:31] will retry after 2.289915246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:45.294211   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:45.294211   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:45.297045   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:46.297784   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:46.297784   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:46.300989   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:32:46.300989   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:32:46.300989   10364 type.go:168] "Request Body" body=""
	I1217 00:32:46.300989   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:46.304308   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
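
The node_ready.go warning above comes from minikube polling the node's Ready condition and treating the EOF as retriable; the same ten-attempt cycle then repeats for the rest of this log. A compact client-go sketch of that poll follows. The kubeconfig path and node name are copied from the log but running this from outside the VM is an assumption of the sketch; minikube's real check runs in-process with its own retry budget.

// node_ready.go sketch — poll a node's Ready condition via client-go.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. the EOFs logged while the apiserver is down
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for { // poll once per second, like the log's 1s retry cadence
		ok, err := nodeReady(cs, "functional-409700")
		if err != nil {
			fmt.Println("will retry:", err)
		} else if ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(time.Second)
	}
}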
	I1217 00:32:47.305471   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:47.305471   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:47.308634   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:47.375936   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:47.409078   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:47.458764   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:47.458804   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:47.458804   10364 retry.go:31] will retry after 7.569688135s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:47.484927   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:47.488464   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:47.488464   10364 retry.go:31] will retry after 9.157991048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:48.309180   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:48.309180   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:48.312403   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:49.312469   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:49.312469   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:49.315488   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:50.316234   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:50.316234   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:50.319889   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:51.320680   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:51.320680   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:51.324928   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:32:52.325755   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:52.325755   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:52.328987   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:53.329277   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:53.329277   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:53.332508   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:54.333122   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:54.333449   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:54.337390   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:55.034235   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:55.110067   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:55.114541   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:55.114568   10364 retry.go:31] will retry after 11.854567632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:55.338017   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:55.338017   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:55.341093   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:56.341403   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:56.341403   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:56.344366   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:32:56.344366   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:32:56.344366   10364 type.go:168] "Request Body" body=""
	I1217 00:32:56.344898   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:56.347007   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:56.652443   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:56.739536   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:56.739536   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:56.739536   10364 retry.go:31] will retry after 10.780280137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:57.347379   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:57.347379   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:57.350807   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:58.351069   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:58.351069   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:58.354096   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:59.354451   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:59.354451   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:59.357775   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:00.357853   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:00.357853   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:00.362050   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:33:01.362288   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:01.362722   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:01.365594   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:02.365849   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:02.366254   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:02.369208   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:03.369619   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:03.369619   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:03.373087   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:04.373596   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:04.373596   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:04.376267   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:05.376901   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:05.376901   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:05.380341   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:06.380779   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:06.380779   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:06.384486   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:06.384486   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:06.384486   10364 type.go:168] "Request Body" body=""
	I1217 00:33:06.384486   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:06.386883   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:06.975138   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:33:07.047365   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:07.053212   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:07.053212   10364 retry.go:31] will retry after 9.4400792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:07.388016   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:07.388016   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:07.391682   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:07.525003   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:33:07.600422   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:07.604097   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:07.604097   10364 retry.go:31] will retry after 21.608180779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:08.392667   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:08.392667   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:08.395310   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:09.395626   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:09.395626   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:09.400417   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:33:10.400757   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:10.400757   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:10.403934   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:11.404855   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:11.404855   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:11.407439   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:12.407525   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:12.407525   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:12.410864   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:13.411229   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:13.411229   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:13.414667   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:14.414815   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:14.414815   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:14.417914   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:15.418400   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:15.418400   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:15.421658   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:16.421803   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:16.421803   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:16.424468   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:33:16.424468   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:16.425000   10364 type.go:168] "Request Body" body=""
	I1217 00:33:16.425000   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:16.427532   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:16.499443   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:33:16.577484   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:16.582973   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:16.583014   10364 retry.go:31] will retry after 31.220452725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:17.427856   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:17.427856   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:17.430661   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:18.431189   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:18.431189   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:18.434303   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:19.434667   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:19.434667   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:19.437774   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:20.438018   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:20.438018   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:20.441284   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:21.442005   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:21.442005   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:21.445477   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:22.446517   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:22.446517   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:22.451991   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:33:23.452224   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:23.452224   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:23.455297   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:24.455662   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:24.455662   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:24.458123   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:25.458634   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:25.458634   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:25.461576   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:26.462089   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:26.462563   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:26.465489   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:33:26.465489   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:26.465647   10364 type.go:168] "Request Body" body=""
	I1217 00:33:26.465647   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:26.468381   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
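Each node_ready.go probe in this stretch of the log reduces to a GET on /api/v1/nodes/functional-409700 followed by a scan of status.conditions for the Ready condition; the EOFs are transport failures that surface from that GET and trigger another retry. A minimal client-go sketch of the check (the helper name and kubeconfig handling are assumptions, not minikube's actual code):

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches the Node object and scans status.conditions for the
    // Ready condition; transport failures like the EOFs above come back as err.
    func nodeReady(name, kubeconfig string) (bool, error) {
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		return false, err
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		return false, err
    	}
    	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
    	if err != nil {
    		return false, err // e.g. Get "...": EOF, which the caller logs and retries
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	ready, err := nodeReady("functional-409700", clientcmd.RecommendedHomeFile)
    	fmt.Println(ready, err)
    }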
	I1217 00:33:27.469289   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:27.469617   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:27.472277   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:28.472725   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:28.473201   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:28.476219   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:29.218035   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:33:29.290496   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:29.295368   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:29.295368   10364 retry.go:31] will retry after 28.200848873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:29.476644   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:29.476644   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:29.479582   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:30.480382   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:30.480382   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:30.483362   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:31.484451   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:31.484451   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:31.488344   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:32.488579   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:32.488579   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:32.491919   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:33.492204   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:33.492204   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:33.494785   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:34.495401   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:34.495401   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:34.499412   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:35.499565   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:35.500315   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:35.503299   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:36.504300   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:36.504300   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:36.507870   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:36.507973   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:36.508033   10364 type.go:168] "Request Body" body=""
	I1217 00:33:36.508113   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:36.510973   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
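The attempt=1..10 cadence repeating through this log comes from client-go's with_retry wrapper: whenever a response carries a Retry-After hint it sleeps for the advertised delay and re-issues the request, up to ten attempts, after which the caller sees the EOF and node_ready.go schedules its own retry. A stdlib-only approximation (the attempt cap and header parsing are assumptions; client-go's real logic also handles HTTP-date values and certain transport errors):

    package main

    import (
    	"fmt"
    	"net/http"
    	"strconv"
    	"time"
    )

    // getWithRetryAfter issues a GET and, whenever the response advertises a
    // Retry-After delay, sleeps and retries, up to ten attempts.
    func getWithRetryAfter(client *http.Client, url string) (*http.Response, error) {
    	const maxAttempts = 10
    	for attempt := 1; attempt <= maxAttempts; attempt++ {
    		resp, err := client.Get(url)
    		if err != nil {
    			return nil, err
    		}
    		ra := resp.Header.Get("Retry-After")
    		if ra == "" {
    			return resp, nil // no throttling hint: hand the response back
    		}
    		resp.Body.Close()
    		secs, convErr := strconv.Atoi(ra)
    		if convErr != nil {
    			secs = 1 // fall back to the 1s delay seen in the log
    		}
    		fmt.Printf("got a Retry-After response: delay=%ds attempt=%d\n", secs, attempt)
    		time.Sleep(time.Duration(secs) * time.Second)
    	}
    	return nil, fmt.Errorf("no usable response from %s after %d attempts", url, maxAttempts)
    }

    func main() {
    	resp, err := getWithRetryAfter(http.DefaultClient, "https://127.0.0.1:56622/api/v1/nodes/functional-409700")
    	fmt.Println(resp, err)
    }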
	I1217 00:33:37.511257   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:37.511257   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:37.514688   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:38.514936   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:38.514936   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:38.518386   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:39.518923   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:39.518923   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:39.520922   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:33:40.521680   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:40.521680   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:40.524367   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:41.525837   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:41.526267   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:41.528903   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:42.529201   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:42.529201   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:42.531842   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:43.532127   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:43.532127   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:43.534820   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:44.536381   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:44.536381   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:44.539631   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:45.540548   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:45.540548   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:45.543978   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:46.544552   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:46.544552   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:46.547995   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:46.547995   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:46.547995   10364 type.go:168] "Request Body" body=""
	I1217 00:33:46.547995   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:46.550843   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:47.551203   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:47.551203   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:47.554480   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:47.809190   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:33:47.891444   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:47.895455   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:47.895455   10364 retry.go:31] will retry after 48.235338214s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
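All of these apply failures share one root cause: kubectl's schema validation needs the OpenAPI document from the apiserver, and localhost:8441 is refusing connections, so each apply dies before anything reaches the cluster. The error text suggests --validate=false, but that only skips the schema fetch; the apply itself would still need a live apiserver. A cheap pre-check before each re-apply would be to poll the apiserver's /readyz endpoint first (a hypothetical helper, not part of minikube):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // apiserverReady reports whether the apiserver answers /readyz with 200.
    func apiserverReady(endpoint string) bool {
    	client := &http.Client{
    		Timeout: 2 * time.Second,
    		// The apiserver's serving cert isn't trusted from this context,
    		// so skip verification for the probe only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(endpoint + "/readyz")
    	if err != nil {
    		return false // e.g. dial tcp [::1]:8441: connect: connection refused
    	}
    	defer resp.Body.Close()
    	return resp.StatusCode == http.StatusOK
    }

    func main() {
    	fmt.Println(apiserverReady("https://localhost:8441"))
    }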
	I1217 00:33:48.554744   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:48.554744   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:48.557563   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:49.558144   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:49.558144   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:49.560984   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:50.561573   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:50.561999   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:50.564681   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:51.564893   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:51.565218   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:51.567822   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:52.568697   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:52.568697   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:52.572043   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:53.572367   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:53.572367   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:53.575543   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:54.576655   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:54.576655   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:54.579628   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:55.580688   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:55.580688   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:55.583829   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:56.585061   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:56.585061   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:56.589344   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:56.589344   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:56.589879   10364 type.go:168] "Request Body" body=""
	I1217 00:33:56.589987   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:56.592329   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:57.501146   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:33:57.569298   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:57.571601   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:57.571601   10364 retry.go:31] will retry after 30.590824936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:57.593179   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:57.593179   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:57.595184   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:58.596116   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:58.596302   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:58.598982   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:59.599603   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:59.599603   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:59.602661   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:00.602875   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:00.603290   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:00.606460   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:01.607309   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:01.607677   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:01.609972   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:02.611301   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:02.611301   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:02.614599   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:03.614800   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:03.614800   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:03.618177   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:04.618602   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:04.618996   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:04.624198   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:05.625646   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:05.625646   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:05.629762   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:06.630421   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:06.630421   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:06.633232   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:06.633232   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:06.633809   10364 type.go:168] "Request Body" body=""
	I1217 00:34:06.633809   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:06.638868   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:07.639683   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:07.639683   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:07.643176   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:08.643409   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:08.643409   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:08.646509   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:09.647445   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:09.647445   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:09.650342   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:10.650843   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:10.651408   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:10.653984   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:11.654782   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:11.654782   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:11.660510   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:12.661264   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:12.661264   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:12.664725   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:13.665643   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:13.665643   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:13.668534   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:14.669351   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:14.669351   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:14.673188   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:15.673306   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:15.673709   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:15.675803   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:16.676778   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:16.676778   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:16.679773   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:16.679872   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:16.679999   10364 type.go:168] "Request Body" body=""
	I1217 00:34:16.680102   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:16.682768   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:17.683817   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:17.683817   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:17.686822   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:18.687027   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:18.687027   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:18.690241   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:19.690694   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:19.690694   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:19.693877   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:20.694298   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:20.694605   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:20.697314   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:21.697742   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:21.697742   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:21.700603   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:22.701210   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:22.701210   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:22.704640   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:23.705172   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:23.705172   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:23.707560   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:24.708954   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:24.708954   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:24.712011   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:25.712539   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:25.712539   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:25.717818   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:26.717996   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:26.717996   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:26.721620   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:34:26.721620   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:26.721620   10364 type.go:168] "Request Body" body=""
	I1217 00:34:26.721620   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:26.725519   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:27.726686   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:27.726686   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:27.729112   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:28.168229   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:34:28.439129   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:28.439129   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:28.439671   10364 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:34:28.730022   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:28.730022   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:28.732579   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:29.733316   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:29.733316   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:29.737180   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:30.737898   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:30.738218   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:30.740633   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:31.741637   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:31.741637   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:31.744968   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:32.745244   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:32.745244   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:32.748688   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:33.749681   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:33.749681   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:33.753864   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:34.754458   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:34.754458   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:34.757550   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:35.757989   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:35.757989   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:35.762318   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:36.136043   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:34:36.218441   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:36.224593   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:36.224593   10364 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:34:36.231181   10364 out.go:179] * Enabled addons: 
	I1217 00:34:36.235148   10364 addons.go:530] duration metric: took 2m0.3003648s for enable addons: enabled=[]
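
The storageclass failure above is a symptom of the apiserver being unreachable (connection refused on localhost:8441), not of the manifest itself: kubectl validation first has to download the OpenAPI schema from the apiserver. Below is a minimal Go sketch of the apply-and-retry pattern that the addons.go "apply failed, will retry" line refers to; the binary, kubeconfig, and manifest paths are taken from the log, while the retry budget and delay are assumptions, and minikube's actual retry loop differs.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// applyWithRetry sketches the "apply failed, will retry" behaviour logged
// by addons.go above. Retry budget and delay are hypothetical.
func applyWithRetry(kubectl, kubeconfig, manifest string, attempts int, delay time.Duration) error {
	var lastErr error
	for i := 1; i <= attempts; i++ {
		cmd := exec.Command(kubectl, "apply", "--force", "-f", manifest)
		cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
		out, err := cmd.CombinedOutput()
		if err == nil {
			return nil
		}
		// Validation needs the apiserver's OpenAPI document, so a refused
		// connection surfaces as the "error validating data" message above.
		lastErr = fmt.Errorf("attempt %d: %v\n%s", i, err, out)
		time.Sleep(delay)
	}
	return lastErr
}

func main() {
	err := applyWithRetry(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl", // paths from the log
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storageclass.yaml",
		5, 2*time.Second, // hypothetical retry budget
	)
	if err != nil {
		fmt.Println("enable addon failed:", err)
	}
}
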
	I1217 00:34:36.762736   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:36.762736   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:36.765107   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:36.765107   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
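
The node_ready warning marks the end of one poll: minikube fetches the Node object and checks its Ready condition, retrying while the apiserver connection keeps dropping (the EOF above). A client-go sketch of such a check follows; the kubeconfig path and node name come from the log, while the poll budget and interval are assumptions.

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady fetches the Node and reports whether its Ready condition is True.
func nodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. the EOF seen above while the apiserver is down
	}
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			return c.Status == v1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Kubeconfig path taken from the log; adjust when running locally.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	for i := 0; i < 60; i++ { // hypothetical poll budget
		if ok, err := nodeReady(context.Background(), cs, "functional-409700"); err == nil && ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(10 * time.Second) // matches the ~10s cadence between warnings
	}
	fmt.Println("timed out waiting for node Ready")
}
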
	I1217 00:34:36.765107   10364 type.go:168] "Request Body" body=""
	I1217 00:34:36.765638   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:36.768239   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:37.768638   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:37.768638   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:37.772263   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	[log condensed: attempts 2-9 (00:34:38-00:34:45) omitted; same one-request-per-second pattern with identical headers and empty responses]
	I1217 00:34:46.802808   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:46.802808   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:46.806272   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:34:46.806272   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
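
Each numbered attempt above comes from client-go's Retry-After handling: the reply carries a Retry-After header, so the client sleeps (1s here) and reissues the request, giving up the inner loop after ten attempts, at which point node_ready logs the warning and starts over. A standalone net/http sketch of that loop, under the assumption of a plain HTTP endpoint; client-go's real implementation lives in k8s.io/client-go/rest.

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getWithRetryAfter mirrors the inner loop that with_retry.go logs above:
// when a reply carries a Retry-After header, sleep and reissue the request,
// up to maxAttempts tries. After the last attempt the response is returned
// to the caller as-is, which is why the warning above fires every ten tries.
func getWithRetryAfter(client *http.Client, url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; ; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			return nil, err
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" || attempt >= maxAttempts {
			return resp, nil
		}
		resp.Body.Close()
		secs, convErr := strconv.Atoi(ra)
		if convErr != nil || secs < 1 {
			secs = 1 // the 1s fallback delay seen in the log
		}
		time.Sleep(time.Duration(secs) * time.Second)
	}
}

func main() {
	// Hypothetical endpoint; the real target in the log is the apiserver at
	// https://127.0.0.1:56622, which needs cluster credentials and TLS.
	resp, err := getWithRetryAfter(http.DefaultClient, "http://127.0.0.1:8080/healthz", 10)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("final status:", resp.Status)
}
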
	I1217 00:34:46.806272   10364 type.go:168] "Request Body" body=""
	I1217 00:34:46.806272   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:46.808808   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:47.809106   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:47.809106   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:47.812072   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:48.812377   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:48.812377   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:48.815804   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:49.816160   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:49.816160   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:49.819073   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:50.819687   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:50.819687   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:50.824808   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:51.825256   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:51.825256   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:51.827149   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:34:52.828172   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:52.828172   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:52.831194   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:53.831502   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:53.831502   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:53.835949   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:54.836430   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:54.836430   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:54.840704   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:55.840945   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:55.840945   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:55.844273   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:56.844698   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:56.844774   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:56.847718   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:56.847718   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:56.847718   10364 type.go:168] "Request Body" body=""
	I1217 00:34:56.847718   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:56.850361   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:57.850724   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:57.850724   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:57.853992   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:58.854839   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:58.854839   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:58.857985   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:59.858686   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:59.859048   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:59.863493   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:35:00.863731   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:00.863731   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:00.867009   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:01.867548   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:01.867986   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:01.870485   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:02.870682   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:02.870682   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:02.874134   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:03.874927   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:03.874927   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:03.877992   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:04.878757   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:04.878757   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:04.882012   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:05.882985   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:05.882985   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:05.886320   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:06.887395   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:06.887395   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:06.890772   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:35:06.890844   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:35:06.890844   10364 type.go:168] "Request Body" body=""
	I1217 00:35:06.890844   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:06.892912   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:07.893541   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:07.893541   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:07.897243   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:08.897423   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:08.897423   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:08.901955   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:35:09.902222   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:09.902222   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:09.905347   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:10.906346   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:10.906346   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:10.909589   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:11.910013   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:11.910424   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:11.913496   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:12.913792   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:12.913792   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:12.917334   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:13.917794   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:13.917794   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:13.920911   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:14.921451   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:14.921902   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:14.924686   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:15.925539   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:15.925539   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:15.928618   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:16.928871   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:16.928871   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:16.932364   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:35:16.932364   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:35:16.932364   10364 type.go:168] "Request Body" body=""
	I1217 00:35:16.932364   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:16.935267   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:17.936075   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:17.936075   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:17.939252   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:18.940390   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:18.940390   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:18.943332   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:19.943802   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:19.943802   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:19.946902   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:20.947509   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:20.947882   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:20.949988   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:21.950644   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:21.950644   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:21.954065   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:22.954236   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:22.954236   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:22.958266   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:23.958794   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:23.959062   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:23.961451   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:24.962012   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:24.962012   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:24.965125   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:25.965439   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:25.965439   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:25.968637   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:26.968810   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:26.968810   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:26.971892   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:35:26.971961   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:35:26.972008   10364 type.go:168] "Request Body" body=""
	I1217 00:35:26.972008   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:26.977052   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:35:27.977730   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:27.977730   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:27.980941   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:28.981406   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:28.981406   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:28.984099   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:29.985140   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:29.985452   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:29.988385   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:30.989318   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:30.989318   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:30.992251   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:31.993148   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:31.993515   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:31.996483   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:32.996803   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:32.997153   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:32.999821   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:33.999930   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:33.999930   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:34.003148   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:35.003410   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:35.003410   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:35.006455   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:36.008349   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:36.008349   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:36.010952   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:37.011100   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:37.011100   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:37.014149   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:35:37.014149   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:35:37.014149   10364 type.go:168] "Request Body" body=""
	I1217 00:35:37.014678   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:37.016502   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:35:38.017464   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:38.017464   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:38.020305   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:39.020641   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:39.020641   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:39.023532   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:40.024042   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:40.024042   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:40.027707   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:41.027942   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:41.027942   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:41.031346   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:42.032292   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:42.032292   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:42.035463   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:43.035799   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:43.036298   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:43.039139   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:44.039453   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:44.039453   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:44.042907   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:45.043589   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:45.043589   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:45.046766   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:46.047648   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:46.047648   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:46.051224   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:47.051642   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:47.051642   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:47.054716   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:35:47.054716   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:35:47.054716   10364 type.go:168] "Request Body" body=""
	I1217 00:35:47.054716   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:47.056987   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:48.058345   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:48.058345   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:48.061555   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:49.061851   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:49.061851   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:49.065062   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:50.065656   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:50.065933   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:50.068127   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:51.068865   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:51.069263   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:51.071479   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:52.072199   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:52.072199   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:52.075414   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:53.076211   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:53.076211   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:53.079310   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:54.079644   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:54.079644   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:54.083395   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:55.083663   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:55.083663   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:55.086632   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:56.087097   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:56.087494   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:56.091591   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:35:57.091913   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:57.092314   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:57.095048   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:35:57.095048   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:35:57.095048   10364 type.go:168] "Request Body" body=""
	I1217 00:35:57.095640   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:57.098264   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:58.098993   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:58.098993   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:58.101747   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:59.103113   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:59.103113   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:59.105884   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:00.107028   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:00.107028   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:00.109881   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:01.110650   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:01.110650   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:01.114650   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:02.114915   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:02.114915   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:02.118186   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:03.118580   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:03.118580   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:03.121988   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:04.123025   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:04.123025   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:04.126587   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:05.127042   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:05.127451   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:05.132256   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:36:06.132687   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:06.133104   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:06.135375   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:07.137054   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:07.137054   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:07.140223   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:36:07.140223   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
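	What the cycle above records is the Kubernetes client's generic retry path: round_trippers.go traces each GET, the empty response is treated as retryable, with_retry.go honors a one-second Retry-After, and after ten attempts minikube's node-readiness poll logs the EOF warning and begins a new cycle. A minimal Go sketch of that request loop, for orientation only (this is an assumed illustration, not minikube's or client-go's actual code; pollNodeReady, nodeURL, and maxAttempts are hypothetical names):

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// pollNodeReady mimics the pattern in the log: up to maxAttempts GETs
	// against the node URL, sleeping the server-suggested Retry-After
	// (fixed at 1s here, matching delay="1s") between attempts, and
	// returning the last error so the caller can log "will retry" and
	// start the next cycle.
	func pollNodeReady(client *http.Client, nodeURL string, maxAttempts int) error {
		var lastErr error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			resp, err := client.Get(nodeURL)
			if err == nil && resp.StatusCode == http.StatusOK {
				resp.Body.Close()
				return nil // got the node object; the caller would inspect the Ready condition
			}
			if err != nil {
				lastErr = err // e.g. the EOF the warning reports
			} else {
				lastErr = fmt.Errorf("unexpected status %s", resp.Status)
				resp.Body.Close()
			}
			time.Sleep(1 * time.Second) // the delay="1s" from with_retry.go
		}
		return fmt.Errorf("after %d attempts: %w", maxAttempts, lastErr)
	}

	func main() {
		client := &http.Client{Timeout: 5 * time.Second}
		// URL copied from the log; in the failing run the apiserver behind it
		// closes the connection, so every attempt ends in an error.
		err := pollNodeReady(client, "https://127.0.0.1:56622/api/v1/nodes/functional-409700", 10)
		fmt.Println(err)
	}

	Because every attempt in this run fails the same way, the node_ready.go:55 warning recurs every ten seconds, presumably until the enclosing wait deadline expires.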
	[... 603 near-identical log lines elided: the 1s retry cycle shown above repeats unchanged from 00:36:07 through 00:37:37, eleven GETs to https://127.0.0.1:56622/api/v1/nodes/functional-409700 per cycle (an initial request plus ten Retry-After retries), each answered with an empty response in 1-5 ms, and each cycle ending with the same node_ready.go:55 warning (error getting node "functional-409700" condition "Ready" status (will retry): EOF) at 00:36:17, 00:36:27, 00:36:37, 00:36:47, 00:36:57, 00:37:07, 00:37:17, 00:37:27, and 00:37:37 ...]
	I1217 00:37:37.512424   10364 type.go:168] "Request Body" body=""
	I1217 00:37:37.512522   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:37.514595   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:38.514845   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:38.514845   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:38.517717   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:39.518411   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:39.518411   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:39.520864   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:40.521889   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:40.521889   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:40.525103   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:41.525419   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:41.525419   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:41.528361   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:42.528733   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:42.529149   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:42.532111   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:43.532896   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:43.532896   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:43.536252   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:44.536867   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:44.536867   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:44.540157   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:45.540486   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:45.540486   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:45.543711   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:46.543879   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:46.543879   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:46.546377   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:47.546832   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:47.546832   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:47.550543   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:37:47.550543   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:37:47.550643   10364 type.go:168] "Request Body" body=""
	I1217 00:37:47.550786   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:47.552960   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:48.553202   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:48.553202   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:48.558015   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:37:49.559371   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:49.559371   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:49.562548   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:50.562966   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:50.562966   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:50.565800   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:51.566293   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:51.566623   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:51.569597   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:52.570511   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:52.570511   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:52.573392   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:53.573965   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:53.573965   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:53.576340   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:54.577062   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:54.577463   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:54.579836   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:55.580473   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:55.580473   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:55.583734   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:56.584454   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:56.584454   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:56.587256   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:57.588397   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:57.588397   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:57.593527   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1217 00:37:57.593527   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:37:57.593527   10364 type.go:168] "Request Body" body=""
	I1217 00:37:57.593527   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:57.597825   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:37:58.598550   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:58.598550   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:58.602122   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:59.602444   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:59.602444   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:59.605501   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:00.606096   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:00.606096   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:00.608989   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:01.609865   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:01.609965   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:01.613038   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:02.613818   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:02.614067   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:02.617196   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:03.617950   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:03.618366   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:03.621156   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:04.621587   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:04.621587   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:04.624616   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:05.625123   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:05.625123   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:05.627780   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:06.628169   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:06.628602   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:06.632684   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:38:07.633450   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:07.633450   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:07.636697   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:38:07.636697   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:38:07.636697   10364 type.go:168] "Request Body" body=""
	I1217 00:38:07.636697   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:07.638671   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:38:08.639000   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:08.639000   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:08.642420   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:09.642718   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:09.642718   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:09.645881   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:10.646391   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:10.646391   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:10.649653   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:11.650077   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:11.650077   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:11.653855   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:12.654508   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:12.654508   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:12.657918   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:13.658238   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:13.658238   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:13.661446   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:14.661684   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:14.661684   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:14.664655   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:15.665257   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:15.665578   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:15.672111   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=6
	I1217 00:38:16.672363   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:16.672363   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:16.675593   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:17.676054   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:17.676054   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:17.679454   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:38:17.679454   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:38:17.679454   10364 type.go:168] "Request Body" body=""
	I1217 00:38:17.679454   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:17.681452   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:38:18.682087   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:18.682087   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:18.685399   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:19.686028   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:19.686535   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:19.689161   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:20.689948   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:20.690239   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:20.692554   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:21.693716   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:21.694009   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:21.696661   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:22.697780   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:22.697780   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:22.700917   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:23.702225   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:23.702225   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:23.705612   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:24.706750   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:24.706750   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:24.710496   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:25.710729   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:25.711065   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:25.713912   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:26.714178   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:26.714178   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:26.718058   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:27.718245   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:27.718578   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:27.721305   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:38:27.721375   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:38:27.721441   10364 type.go:168] "Request Body" body=""
	I1217 00:38:27.721441   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:27.723332   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:38:28.723805   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:28.724207   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:28.727033   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:29.727723   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:29.727723   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:29.730941   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:30.731355   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:30.731355   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:30.734083   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:31.734645   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:31.734645   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:31.737932   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:32.738159   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:32.738159   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:32.741332   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:33.741889   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:33.741889   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:33.744576   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:34.745133   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:34.745546   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:34.747888   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:35.749177   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:35.749177   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:35.751796   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:38:36.264530   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1217 00:38:36.264530   10364 node_ready.go:38] duration metric: took 6m0.0004133s for node "functional-409700" to be "Ready" ...
	I1217 00:38:36.268017   10364 out.go:203] 
	W1217 00:38:36.270772   10364 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 00:38:36.270772   10364 out.go:285] * 
	W1217 00:38:36.272556   10364 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:38:36.275101   10364 out.go:203] 

                                                
                                                
** /stderr **
functional_test.go:676: failed to soft start minikube. args "out/minikube-windows-amd64.exe start -p functional-409700 --alsologtostderr -v=8": exit status 80
functional_test.go:678: soft start took 6m11.2448189s for "functional-409700" cluster.
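The stderr block above is one long client-go wait loop: minikube's node_ready.go fetches the node once per second, every GET ends in a Retry-After/EOF bounce, and once the 6m0s budget is spent the context deadline fires and the run exits with GUEST_START. A minimal sketch of that kind of Ready-condition poll, assuming a configured client-go *kubernetes.Clientset (names and logging here are illustrative, not minikube's actual code):

    // Package nodewait sketches the Ready-condition poll visible in the stderr
    // above: GET the node once per second, log transient errors, and stop only
    // when the node reports Ready or the caller's context deadline expires.
    package nodewait

    import (
        "context"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
    )

    // WaitNodeReady blocks until the named node's Ready condition is True,
    // or returns ctx.Err() ("context deadline exceeded" in the log above).
    func WaitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
        tick := time.NewTicker(time.Second)
        defer tick.Stop()
        for {
            node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
            if err != nil {
                // Transient failures (the EOFs above) are logged, then retried.
                log.Printf("error getting node %q (will retry): %v", name, err)
            } else {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        return nil
                    }
                }
            }
            select {
            case <-ctx.Done():
                return ctx.Err()
            case <-tick.C:
            }
        }
    }

Nothing breaks the loop except the node turning Ready or the deadline, which is exactly the "will retry" / "context deadline exceeded" shape of the failure above.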
I1217 00:38:37.061960    4168 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-409700
helpers_test.go:244: (dbg) docker inspect functional-409700:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de",
	        "Created": "2025-12-17T00:24:05.223199249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:24:05.522288836Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hosts",
	        "LogPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de-json.log",
	        "Name": "/functional-409700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-409700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-409700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-409700",
	                "Source": "/var/lib/docker/volumes/functional-409700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-409700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-409700",
	                "name.minikube.sigs.k8s.io": "functional-409700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e875b43ca920e8e90c82b8f1c4d2b0999a57d980ebe17c6406f45a4ccb58168",
	            "SandboxKey": "/var/run/docker/netns/6e875b43ca92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56623"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56619"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56620"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56621"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56622"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-409700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ee1b2722ed4e503e063723d4c0c00abc99d4e57387b6e181156511528a5a0896",
	                    "EndpointID": "42fbe7a4b084643a92cc2b6c93734665bcde06afb5eef9fe47b1c8f2757b2d71",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-409700",
	                        "ee5097ea8c4b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
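The inspect output ties the log back together: the apiserver's 8441/tcp is published on 127.0.0.1:56622, exactly the URL the wait loop was hammering, so the container and its port mapping are intact even though every request ends in EOF. One way to recover that mapping programmatically, as a small sketch that shells out to docker inspect (profile name as above; the helper itself is hypothetical):

    // apiport prints the host port Docker mapped to the apiserver's 8441/tcp.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "inspect", "-f",
            `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
            "functional-409700").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("apiserver host port:", strings.TrimSpace(string(out))) // e.g. 56622
    }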
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700: exit status 2 (622.3642ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 logs -n 25: (1.4114682s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                                ARGS                                                                                 │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-045600 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                            │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │ 17 Dec 25 00:18 UTC │
	│ service        │ functional-045600 service hello-node --url --format={{.IP}}                                                                                                         │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │                     │
	│ ssh            │ functional-045600 ssh sudo cat /etc/test/nested/copy/4168/hosts                                                                                                     │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │ 17 Dec 25 00:18 UTC │
	│ docker-env     │ functional-045600 docker-env                                                                                                                                        │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │ 17 Dec 25 00:18 UTC │
	│ dashboard      │ --url --port 36195 -p functional-045600 --alsologtostderr -v=1                                                                                                      │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │                     │
	│ service        │ functional-045600 service hello-node --url                                                                                                                          │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │                     │
	│ cp             │ functional-045600 cp testdata\cp-test.txt /home/docker/cp-test.txt                                                                                                  │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ ssh            │ functional-045600 ssh -n functional-045600 sudo cat /home/docker/cp-test.txt                                                                                        │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ cp             │ functional-045600 cp functional-045600:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd2737548863\001\cp-test.txt │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ ssh            │ functional-045600 ssh -n functional-045600 sudo cat /home/docker/cp-test.txt                                                                                        │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ cp             │ functional-045600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                           │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ ssh            │ functional-045600 ssh -n functional-045600 sudo cat /tmp/does/not/exist/cp-test.txt                                                                                 │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls --format short --alsologtostderr                                                                                                         │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls --format json --alsologtostderr                                                                                                          │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls --format table --alsologtostderr                                                                                                         │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls --format yaml --alsologtostderr                                                                                                          │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ ssh            │ functional-045600 ssh pgrep buildkitd                                                                                                                               │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │                     │
	│ image          │ functional-045600 image build -t localhost/my-image:functional-045600 testdata\build --alsologtostderr                                                              │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls                                                                                                                                          │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                                                             │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                                                             │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                                                             │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ delete         │ -p functional-045600                                                                                                                                                │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:23 UTC │ 17 Dec 25 00:23 UTC │
	│ start          │ -p functional-409700 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:23 UTC │                     │
	│ start          │ -p functional-409700 --alsologtostderr -v=8                                                                                                                         │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:32:25
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
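	The header format above is the standard glog/klog layout used by every line that follows. As a rough aid when grepping these logs, here is a minimal bash sketch (the helper name parse_klog is ours, not minikube's) that splits one such line into its header fields:

	    parse_klog() {                      # split a klog-style line into its header fields
	      local line=$1
	      local sev=${line:0:1}             # severity: I, W, E, or F
	      local mmdd=${line:1:4}            # month and day
	      local hms=${line:6:15}            # hh:mm:ss.uuuuuu
	      set -- $line                      # word-split to reach the thread id and file:line]
	      printf 'sev=%s date=%s time=%s tid=%s src=%s\n' "$sev" "$mmdd" "$hms" "$3" "${4%]}"
	    }
	    parse_klog 'I1217 00:32:25.884023   10364 out.go:360] Setting OutFile to fd 1372 ...'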
	I1217 00:32:25.884023   10364 out.go:360] Setting OutFile to fd 1372 ...
	I1217 00:32:25.926022   10364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:25.926022   10364 out.go:374] Setting ErrFile to fd 1800...
	I1217 00:32:25.926022   10364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:25.940016   10364 out.go:368] Setting JSON to false
	I1217 00:32:25.942016   10364 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3134,"bootTime":1765928411,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:32:25.942016   10364 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:32:25.946016   10364 out.go:179] * [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:32:25.948015   10364 notify.go:221] Checking for updates...
	I1217 00:32:25.950019   10364 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:25.952018   10364 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:32:25.955015   10364 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:32:25.957015   10364 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:32:25.960017   10364 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:32:25.964016   10364 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:32:25.964016   10364 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:32:26.171156   10364 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:32:26.176438   10364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:32:26.427526   10364 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 00:32:26.406486235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:32:26.434528   10364 out.go:179] * Using the docker driver based on existing profile
	I1217 00:32:26.436524   10364 start.go:309] selected driver: docker
	I1217 00:32:26.436524   10364 start.go:927] validating driver "docker" against &{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:26.436524   10364 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:32:26.442525   10364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:32:26.668518   10364 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 00:32:26.649642613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:32:26.752324   10364 cni.go:84] Creating CNI manager for ""
	I1217 00:32:26.752324   10364 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
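	The bridge CNI chosen above is minikube's automatic pick for the docker driver with the docker runtime on Kubernetes v1.24+. If one wanted to pin the choice instead of relying on auto-detection, the start flag does it; a sketch reusing this profile's name:

	    minikube start -p functional-409700 --cni=bridge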
	I1217 00:32:26.752324   10364 start.go:353] cluster config:
	{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:26.755825   10364 out.go:179] * Starting "functional-409700" primary control-plane node in "functional-409700" cluster
	I1217 00:32:26.757701   10364 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 00:32:26.760609   10364 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:32:26.762036   10364 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:32:26.763103   10364 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 00:32:26.763103   10364 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:32:26.763103   10364 cache.go:65] Caching tarball of preloaded images
	I1217 00:32:26.763399   10364 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 00:32:26.763399   10364 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 00:32:26.763399   10364 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\config.json ...
	I1217 00:32:26.840670   10364 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:32:26.840729   10364 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:32:26.840729   10364 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:32:26.840729   10364 start.go:360] acquireMachinesLock for functional-409700: {Name:mk3729943c20c012b6c7db136193ce43a4a81cc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:32:26.840729   10364 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-409700"
	I1217 00:32:26.840729   10364 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:32:26.840729   10364 fix.go:54] fixHost starting: 
	I1217 00:32:26.848208   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:26.901821   10364 fix.go:112] recreateIfNeeded on functional-409700: state=Running err=<nil>
	W1217 00:32:26.901821   10364 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:32:26.907276   10364 out.go:252] * Updating the running docker "functional-409700" container ...
	I1217 00:32:26.907373   10364 machine.go:94] provisionDockerMachine start ...
	I1217 00:32:26.910817   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:26.967003   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:26.967068   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:26.967068   10364 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:32:27.152656   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:32:27.152656   10364 ubuntu.go:182] provisioning hostname "functional-409700"
	I1217 00:32:27.156074   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:27.214234   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:27.214712   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:27.214757   10364 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-409700 && echo "functional-409700" | sudo tee /etc/hostname
	I1217 00:32:27.407594   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:32:27.413090   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:27.490102   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:27.490703   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:27.490749   10364 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-409700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-409700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-409700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:32:27.672866   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:32:27.672866   10364 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 00:32:27.672866   10364 ubuntu.go:190] setting up certificates
	I1217 00:32:27.672866   10364 provision.go:84] configureAuth start
	I1217 00:32:27.676807   10364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:32:27.732901   10364 provision.go:143] copyHostCerts
	I1217 00:32:27.733091   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1217 00:32:27.733091   10364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 00:32:27.733091   10364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 00:32:27.733091   10364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 00:32:27.734330   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1217 00:32:27.734382   10364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 00:32:27.734382   10364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 00:32:27.734382   10364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 00:32:27.735088   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1217 00:32:27.735088   10364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 00:32:27.735088   10364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 00:32:27.735728   10364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 00:32:27.736339   10364 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-409700 san=[127.0.0.1 192.168.49.2 functional-409700 localhost minikube]
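	The server certificate generated here carries the SANs listed above (127.0.0.1, 192.168.49.2, the profile name, localhost, minikube). To spot-check them after provisioning, one could read the certificate back with openssl; the path below assumes MINIKUBE_HOME is set to the .minikube directory shown in this log:

	    openssl x509 -noout -text -in "$MINIKUBE_HOME/machines/server.pem" \
	      | grep -A1 'Subject Alternative Name'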
	I1217 00:32:27.847670   10364 provision.go:177] copyRemoteCerts
	I1217 00:32:27.851712   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:32:27.854410   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:27.907971   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:28.027015   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1217 00:32:28.027015   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:32:28.064351   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1217 00:32:28.064351   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:32:28.092479   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1217 00:32:28.092479   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:32:28.124650   10364 provision.go:87] duration metric: took 451.7801ms to configureAuth
	I1217 00:32:28.124650   10364 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:32:28.125238   10364 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:32:28.128674   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.184894   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:28.185614   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:28.185614   10364 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 00:32:28.351273   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 00:32:28.351273   10364 ubuntu.go:71] root file system type: overlay
	I1217 00:32:28.351273   10364 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 00:32:28.355630   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.410840   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:28.411043   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:28.411043   10364 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 00:32:28.608128   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
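	The comments in this unit describe a general systemd rule: a drop-in that redefines ExecStart= for a non-oneshot service must first clear the inherited value with an empty ExecStart=. A minimal stand-alone illustration (the override path and dockerd flags here are examples, not what minikube writes):

	    sudo mkdir -p /etc/systemd/system/docker.service.d
	    printf '%s\n' '[Service]' 'ExecStart=' \
	      'ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock' \
	      | sudo tee /etc/systemd/system/docker.service.d/override.conf
	    sudo systemctl daemon-reload && sudo systemctl restart docker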
	
	I1217 00:32:28.612284   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.672356   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:28.672356   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:28.672356   10364 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 00:32:28.839586   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:32:28.839640   10364 machine.go:97] duration metric: took 1.9322227s to provisionDockerMachine
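	The diff-or-replace one-liner above makes the unit update idempotent: the daemon is only reloaded, and docker only restarted, when the freshly rendered unit actually differs from the installed one. The same pattern, spelled out with the paths from the log:

	    new=/lib/systemd/system/docker.service.new
	    dst=/lib/systemd/system/docker.service
	    sudo diff -u "$dst" "$new" || {     # diff exits non-zero on any difference
	      sudo mv "$new" "$dst"
	      sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	    }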
	I1217 00:32:28.839640   10364 start.go:293] postStartSetup for "functional-409700" (driver="docker")
	I1217 00:32:28.839640   10364 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:32:28.845012   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:32:28.847117   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.904187   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.040693   10364 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:32:29.050158   10364 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 00:32:29.050158   10364 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 00:32:29.050158   10364 command_runner.go:130] > VERSION_ID="12"
	I1217 00:32:29.050158   10364 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 00:32:29.050158   10364 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 00:32:29.050158   10364 command_runner.go:130] > ID=debian
	I1217 00:32:29.050158   10364 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 00:32:29.050158   10364 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 00:32:29.050158   10364 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 00:32:29.050158   10364 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:32:29.050158   10364 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:32:29.050158   10364 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 00:32:29.050158   10364 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 00:32:29.050833   10364 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 00:32:29.050833   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> /etc/ssl/certs/41682.pem
	I1217 00:32:29.051707   10364 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts -> hosts in /etc/test/nested/copy/4168
	I1217 00:32:29.051707   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts -> /etc/test/nested/copy/4168/hosts
	I1217 00:32:29.055303   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4168
	I1217 00:32:29.070738   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 00:32:29.103807   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts --> /etc/test/nested/copy/4168/hosts (40 bytes)
	I1217 00:32:29.133625   10364 start.go:296] duration metric: took 293.9818ms for postStartSetup
	I1217 00:32:29.137970   10364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:32:29.142249   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:29.194718   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.311046   10364 command_runner.go:130] > 1%
	I1217 00:32:29.316279   10364 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:32:29.324732   10364 command_runner.go:130] > 950G
	I1217 00:32:29.324732   10364 fix.go:56] duration metric: took 2.4839807s for fixHost
	I1217 00:32:29.324732   10364 start.go:83] releasing machines lock for "functional-409700", held for 2.4839807s
	I1217 00:32:29.330157   10364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:32:29.384617   10364 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 00:32:29.388675   10364 ssh_runner.go:195] Run: cat /version.json
	I1217 00:32:29.388675   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:29.392044   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:29.442282   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.464827   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.558946   10364 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1217 00:32:29.559478   10364 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
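	Exit status 127 is "command not found": minikube invoked curl.exe, the Windows binary name, inside the Linux node, where no such file exists. That is what triggers the registry-connectivity warning a few lines below. Assuming plain curl is present in the node image, the same probe can be replayed by hand:

	    minikube -p functional-409700 ssh -- curl -sS -m 2 https://registry.k8s.io/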
	I1217 00:32:29.581467   10364 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 00:32:29.585625   10364 ssh_runner.go:195] Run: systemctl --version
	I1217 00:32:29.598125   10364 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 00:32:29.598125   10364 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 00:32:29.602648   10364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 00:32:29.614417   10364 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 00:32:29.615099   10364 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:32:29.621960   10364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:32:29.646439   10364 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:32:29.646439   10364 start.go:496] detecting cgroup driver to use...
	I1217 00:32:29.646439   10364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:32:29.646439   10364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:32:29.668226   10364 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1217 00:32:29.672516   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:32:29.695799   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:32:29.710451   10364 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:32:29.715117   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1217 00:32:29.723829   10364 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 00:32:29.723829   10364 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 00:32:29.737249   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:32:29.756347   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:32:29.779698   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:32:29.801679   10364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:32:29.825863   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:32:29.844752   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:32:29.865139   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
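	The run of sed edits above rewrites /etc/containerd/config.toml in place: pinning the pause image to registry.k8s.io/pause:3.10.1, forcing restrict_oom_score_adj and SystemdCgroup off (to match the cgroupfs driver detected earlier), migrating legacy runtime names to io.containerd.runc.v2, setting the CNI conf_dir, and enabling unprivileged ports. A quick way to spot-check the result from the host:

	    minikube -p functional-409700 ssh -- \
	      "grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml"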
	I1217 00:32:29.885382   10364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:32:29.900142   10364 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 00:32:29.904180   10364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:32:29.922078   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:30.133548   10364 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 00:32:30.412249   10364 start.go:496] detecting cgroup driver to use...
	I1217 00:32:30.412298   10364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:32:30.416670   10364 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 00:32:30.435945   10364 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1217 00:32:30.435945   10364 command_runner.go:130] > [Unit]
	I1217 00:32:30.435945   10364 command_runner.go:130] > Description=Docker Application Container Engine
	I1217 00:32:30.435945   10364 command_runner.go:130] > Documentation=https://docs.docker.com
	I1217 00:32:30.435945   10364 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1217 00:32:30.435945   10364 command_runner.go:130] > Wants=network-online.target containerd.service
	I1217 00:32:30.435945   10364 command_runner.go:130] > Requires=docker.socket
	I1217 00:32:30.435945   10364 command_runner.go:130] > StartLimitBurst=3
	I1217 00:32:30.435945   10364 command_runner.go:130] > StartLimitIntervalSec=60
	I1217 00:32:30.435945   10364 command_runner.go:130] > [Service]
	I1217 00:32:30.435945   10364 command_runner.go:130] > Type=notify
	I1217 00:32:30.435945   10364 command_runner.go:130] > Restart=always
	I1217 00:32:30.435945   10364 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1217 00:32:30.435945   10364 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1217 00:32:30.435945   10364 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1217 00:32:30.435945   10364 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1217 00:32:30.435945   10364 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1217 00:32:30.435945   10364 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1217 00:32:30.435945   10364 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1217 00:32:30.435945   10364 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1217 00:32:30.435945   10364 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1217 00:32:30.435945   10364 command_runner.go:130] > ExecStart=
	I1217 00:32:30.435945   10364 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1217 00:32:30.435945   10364 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1217 00:32:30.435945   10364 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1217 00:32:30.435945   10364 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1217 00:32:30.435945   10364 command_runner.go:130] > LimitNOFILE=infinity
	I1217 00:32:30.435945   10364 command_runner.go:130] > LimitNPROC=infinity
	I1217 00:32:30.435945   10364 command_runner.go:130] > LimitCORE=infinity
	I1217 00:32:30.435945   10364 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1217 00:32:30.435945   10364 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1217 00:32:30.435945   10364 command_runner.go:130] > TasksMax=infinity
	I1217 00:32:30.437404   10364 command_runner.go:130] > TimeoutStartSec=0
	I1217 00:32:30.437404   10364 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1217 00:32:30.437404   10364 command_runner.go:130] > Delegate=yes
	I1217 00:32:30.437404   10364 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1217 00:32:30.437404   10364 command_runner.go:130] > KillMode=process
	I1217 00:32:30.437404   10364 command_runner.go:130] > OOMScoreAdjust=-500
	I1217 00:32:30.437404   10364 command_runner.go:130] > [Install]
	I1217 00:32:30.437404   10364 command_runner.go:130] > WantedBy=multi-user.target
	I1217 00:32:30.443833   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:32:30.468114   10364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:32:30.542786   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:32:30.567969   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:32:30.586631   10364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:32:30.606342   10364 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
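	With /etc/crictl.yaml now pointing at cri-dockerd's socket, the crictl calls later in this start address Docker through the CRI shim. The endpoint can also be passed per invocation, which is handy when the YAML is absent:

	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version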
	I1217 00:32:30.611878   10364 ssh_runner.go:195] Run: which cri-dockerd
	I1217 00:32:30.618659   10364 command_runner.go:130] > /usr/bin/cri-dockerd
	I1217 00:32:30.623279   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 00:32:30.636760   10364 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 00:32:30.661689   10364 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 00:32:30.828747   10364 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 00:32:30.988536   10364 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 00:32:30.988536   10364 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
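	The 130-byte daemon.json pushed here is not echoed in the log; per the preceding line, its job is to pin Docker's cgroup driver to cgroupfs. A representative file of roughly that shape (contents assumed, not taken from the log):

	    echo '{ "exec-opts": ["native.cgroupdriver=cgroupfs"] }' | sudo tee /etc/docker/daemon.json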
	I1217 00:32:31.016800   10364 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 00:32:31.041396   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:31.178126   10364 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 00:32:32.195651   10364 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0175164s)
	I1217 00:32:32.199801   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:32:32.224938   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 00:32:32.247199   10364 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 00:32:32.275016   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:32:32.297360   10364 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 00:32:32.448301   10364 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 00:32:32.597398   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:32.739627   10364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 00:32:32.765463   10364 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 00:32:32.790341   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:32.929296   10364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 00:32:33.067092   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:32:33.087872   10364 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 00:32:33.092277   10364 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 00:32:33.102122   10364 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1217 00:32:33.102122   10364 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 00:32:33.102122   10364 command_runner.go:130] > Device: 0,112	Inode: 1758        Links: 1
	I1217 00:32:33.102122   10364 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1217 00:32:33.102122   10364 command_runner.go:130] > Access: 2025-12-17 00:32:32.939070006 +0000
	I1217 00:32:33.102122   10364 command_runner.go:130] > Modify: 2025-12-17 00:32:32.939070006 +0000
	I1217 00:32:33.102122   10364 command_runner.go:130] > Change: 2025-12-17 00:32:32.939070006 +0000
	I1217 00:32:33.103099   10364 command_runner.go:130] >  Birth: -
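	The stat above is the first pass of the 60-second socket wait succeeding immediately. A shell sketch of that kind of poll, with the socket path from the log:

	    for _ in $(seq 1 60); do            # wait up to ~60s for the CRI socket to appear
	      sudo test -S /var/run/cri-dockerd.sock && break
	      sleep 1
	    done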
	I1217 00:32:33.103099   10364 start.go:564] Will wait 60s for crictl version
	I1217 00:32:33.106627   10364 ssh_runner.go:195] Run: which crictl
	I1217 00:32:33.116038   10364 command_runner.go:130] > /usr/local/bin/crictl
	I1217 00:32:33.119921   10364 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:32:33.163697   10364 command_runner.go:130] > Version:  0.1.0
	I1217 00:32:33.163697   10364 command_runner.go:130] > RuntimeName:  docker
	I1217 00:32:33.163697   10364 command_runner.go:130] > RuntimeVersion:  29.1.3
	I1217 00:32:33.163697   10364 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 00:32:33.163697   10364 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 00:32:33.167790   10364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:32:33.207644   10364 command_runner.go:130] > 29.1.3
	I1217 00:32:33.212842   10364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:32:33.256029   10364 command_runner.go:130] > 29.1.3
	I1217 00:32:33.258896   10364 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 00:32:33.262892   10364 cli_runner.go:164] Run: docker exec -t functional-409700 dig +short host.docker.internal
	I1217 00:32:33.463377   10364 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 00:32:33.467155   10364 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 00:32:33.475542   10364 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1217 00:32:33.478907   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:33.533350   10364 kubeadm.go:884] updating cluster {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:32:33.533350   10364 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:32:33.537278   10364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1217 00:32:33.575248   10364 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:32:33.575248   10364 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 00:32:33.575248   10364 docker.go:621] Images already preloaded, skipping extraction
	I1217 00:32:33.579121   10364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:32:33.614970   10364 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 00:32:33.615044   10364 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 00:32:33.615044   10364 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1217 00:32:33.615141   10364 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:32:33.615171   10364 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 00:32:33.615171   10364 cache_images.go:86] Images are preloaded, skipping loading
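Note: the "Images are preloaded, skipping loading" decision above reduces to a set comparison between the images required for v1.35.0-beta.0 and the repo:tag list printed by `docker images`. A rough sketch of that check (function name hypothetical, not minikube's cache_images.go code):

	// missingImages returns the required images absent from the runtime.
	func missingImages(required, present []string) []string {
		have := make(map[string]bool, len(present))
		for _, img := range present {
			have[img] = true
		}
		var missing []string
		for _, img := range required {
			if !have[img] {
				missing = append(missing, img) // e.g. registry.k8s.io/etcd:3.6.5-0
			}
		}
		return missing
	}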
	I1217 00:32:33.615171   10364 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1217 00:32:33.615349   10364 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-409700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
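Note: the doubled ExecStart= in the kubelet unit above is the standard systemd drop-in idiom: for a non-oneshot service, the inherited command must first be cleared with an empty ExecStart= assignment before the override can set the replacement command line (here, the kubelet binary under /var/lib/minikube/binaries with the node-specific flags shown).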
	I1217 00:32:33.618510   10364 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 00:32:34.052354   10364 command_runner.go:130] > cgroupfs
	I1217 00:32:34.052472   10364 cni.go:84] Creating CNI manager for ""
	I1217 00:32:34.052529   10364 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:32:34.052529   10364 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:32:34.052529   10364 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-409700 NodeName:functional-409700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:32:34.052529   10364 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-409700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:32:34.056808   10364 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:32:34.073105   10364 command_runner.go:130] > kubeadm
	I1217 00:32:34.073177   10364 command_runner.go:130] > kubectl
	I1217 00:32:34.073177   10364 command_runner.go:130] > kubelet
	I1217 00:32:34.073240   10364 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:32:34.077459   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:32:34.090893   10364 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 00:32:34.114750   10364 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:32:34.135531   10364 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1217 00:32:34.159985   10364 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:32:34.168280   10364 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 00:32:34.172492   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:34.310890   10364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:32:34.700023   10364 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700 for IP: 192.168.49.2
	I1217 00:32:34.700115   10364 certs.go:195] generating shared ca certs ...
	I1217 00:32:34.700115   10364 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:34.700485   10364 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 00:32:34.701055   10364 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 00:32:34.701055   10364 certs.go:257] generating profile certs ...
	I1217 00:32:34.701864   10364 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\client.key
	I1217 00:32:34.702120   10364 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key.dc66fb1b
	I1217 00:32:34.702437   10364 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key
	I1217 00:32:34.702487   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 00:32:34.702646   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 00:32:34.703540   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 00:32:34.703598   10364 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 00:32:34.703598   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 00:32:34.703598   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 00:32:34.704137   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 00:32:34.704439   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 00:32:34.704439   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 00:32:34.704439   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:34.704970   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem -> /usr/share/ca-certificates/4168.pem
	I1217 00:32:34.705196   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> /usr/share/ca-certificates/41682.pem
	I1217 00:32:34.706089   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:32:34.736497   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:32:34.769712   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:32:34.802984   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:32:34.830525   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:32:34.860563   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:32:34.889179   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:32:34.920536   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:32:34.947027   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:32:34.978500   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 00:32:35.008458   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 00:32:35.040774   10364 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:32:35.063574   10364 ssh_runner.go:195] Run: openssl version
	I1217 00:32:35.083169   10364 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 00:32:35.087374   10364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.105491   10364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:32:35.130590   10364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.139034   10364 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.139034   10364 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.144343   10364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.192130   10364 command_runner.go:130] > b5213941
	I1217 00:32:35.199882   10364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:32:35.220625   10364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.238544   10364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 00:32:35.259065   10364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.266549   10364 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.266638   10364 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.271223   10364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.315698   10364 command_runner.go:130] > 51391683
	I1217 00:32:35.322687   10364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:32:35.339650   10364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.358290   10364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 00:32:35.374891   10364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.383058   10364 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.383058   10364 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.387660   10364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.431595   10364 command_runner.go:130] > 3ec20f2e
	I1217 00:32:35.436891   10364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
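Note: the openssl/ln sequence above (repeated for each of the three certs) implements the OpenSSL CA directory convention: a trusted certificate must be reachable as /etc/ssl/certs/<subject-hash>.0, where the hash is what `openssl x509 -hash -noout` prints (b5213941, 51391683, 3ec20f2e here). A hedged Go sketch of the same two steps, shelling out to openssl much as minikube's ssh_runner does remotely:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert symlinks a CA certificate into /etc/ssl/certs under its
	// OpenSSL subject hash, e.g. b5213941.0 for minikubeCA.pem above.
	func linkCACert(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // ln -fs semantics: replace an existing link
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Println(err)
		}
	}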
	I1217 00:32:35.453526   10364 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:32:35.462183   10364 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:32:35.462183   10364 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 00:32:35.462183   10364 command_runner.go:130] > Device: 8,48	Inode: 15294       Links: 1
	I1217 00:32:35.462183   10364 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:32:35.462183   10364 command_runner.go:130] > Access: 2025-12-17 00:28:21.018933524 +0000
	I1217 00:32:35.462183   10364 command_runner.go:130] > Modify: 2025-12-17 00:24:18.315890848 +0000
	I1217 00:32:35.462183   10364 command_runner.go:130] > Change: 2025-12-17 00:24:18.315890848 +0000
	I1217 00:32:35.462183   10364 command_runner.go:130] >  Birth: 2025-12-17 00:24:18.315890848 +0000
	I1217 00:32:35.466206   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:32:35.509324   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.514900   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:32:35.558615   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.563444   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:32:35.608112   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.612517   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:32:35.657914   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.662797   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:32:35.707243   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.713694   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:32:35.760477   10364 command_runner.go:130] > Certificate will not expire
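Note: each `openssl x509 -checkend 86400` call above asks whether the certificate expires within the next 86400 seconds (24 hours); exit status 0 produces the "Certificate will not expire" lines. The equivalent check in Go, as a sketch:

	package certs

	import (
		"crypto/x509"
		"encoding/pem"
		"errors"
		"time"
	)

	// expiresWithin reports whether the first certificate in pemBytes
	// expires within d, mirroring `openssl x509 -checkend`.
	func expiresWithin(pemBytes []byte, d time.Duration) (bool, error) {
		block, _ := pem.Decode(pemBytes)
		if block == nil {
			return false, errors.New("no PEM data found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}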
	I1217 00:32:35.761002   10364 kubeadm.go:401] StartCluster: {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:35.764353   10364 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 00:32:35.796231   10364 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:32:35.810900   10364 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 00:32:35.810946   10364 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 00:32:35.810946   10364 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 00:32:35.810996   10364 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:32:35.810996   10364 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:32:35.815318   10364 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:32:35.828811   10364 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:32:35.832840   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:35.889236   10364 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-409700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:35.889236   10364 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-409700" cluster setting kubeconfig missing "functional-409700" context setting]
	I1217 00:32:35.889236   10364 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:35.906814   10364 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:35.907042   10364 kapi.go:59] client config for functional-409700: &rest.Config{Host:"https://127.0.0.1:56622", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff734ad9080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:32:35.908414   10364 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 00:32:35.912354   10364 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:32:35.931570   10364 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1217 00:32:35.931672   10364 kubeadm.go:602] duration metric: took 120.6751ms to restartPrimaryControlPlane
	I1217 00:32:35.931672   10364 kubeadm.go:403] duration metric: took 170.6688ms to StartCluster
	I1217 00:32:35.931672   10364 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:35.931672   10364 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:35.932861   10364 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:35.933736   10364 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 00:32:35.933736   10364 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:32:35.933901   10364 addons.go:70] Setting storage-provisioner=true in profile "functional-409700"
	I1217 00:32:35.933901   10364 addons.go:239] Setting addon storage-provisioner=true in "functional-409700"
	I1217 00:32:35.933901   10364 addons.go:70] Setting default-storageclass=true in profile "functional-409700"
	I1217 00:32:35.934051   10364 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:32:35.934098   10364 host.go:66] Checking if "functional-409700" exists ...
	I1217 00:32:35.934098   10364 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-409700"
	I1217 00:32:35.936531   10364 out.go:179] * Verifying Kubernetes components...
	I1217 00:32:35.942620   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:35.942620   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:35.944620   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:36.000654   10364 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:32:36.002654   10364 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:36.002654   10364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:32:36.005647   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:36.010648   10364 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:36.011652   10364 kapi.go:59] client config for functional-409700: &rest.Config{Host:"https://127.0.0.1:56622", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff734ad9080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:32:36.012648   10364 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:32:36.012648   10364 addons.go:239] Setting addon default-storageclass=true in "functional-409700"
	I1217 00:32:36.012648   10364 host.go:66] Checking if "functional-409700" exists ...
	I1217 00:32:36.019655   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:36.056654   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:36.069645   10364 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:36.069645   10364 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:32:36.072658   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:36.098645   10364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:32:36.122646   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:36.187680   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:36.202921   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:36.260682   10364 node_ready.go:35] waiting up to 6m0s for node "functional-409700" to be "Ready" ...
	I1217 00:32:36.260849   10364 type.go:168] "Request Body" body=""
	I1217 00:32:36.261061   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:36.264195   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
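Note: node_ready.go above polls the node object (GET /api/v1/nodes/functional-409700, with a protobuf-preferred Accept header) until its Ready condition is True, backing off when the apiserver answers with Retry-After. With client-go, the readiness check itself looks roughly like this (the kubeconfig path is a placeholder):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-409700", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				fmt.Println("Ready:", c.Status == corev1.ConditionTrue)
			}
		}
	}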
	I1217 00:32:36.265260   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:36.336693   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.340106   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.340627   10364 retry.go:31] will retry after 202.939607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.388976   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.393288   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.393853   10364 retry.go:31] will retry after 227.289762ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.548879   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:36.622050   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.626260   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.626260   10364 retry.go:31] will retry after 395.113457ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.626489   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:36.698520   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.702459   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.702459   10364 retry.go:31] will retry after 468.39049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.026805   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:37.111151   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.116224   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.116762   10364 retry.go:31] will retry after 792.119284ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.177175   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:37.249858   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.255359   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.255359   10364 retry.go:31] will retry after 596.241339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.265542   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:37.265542   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:37.267933   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:37.856198   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:37.913554   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:37.941640   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.944331   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.944331   10364 retry.go:31] will retry after 571.98292ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.986334   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.989310   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.989310   10364 retry.go:31] will retry after 625.589854ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.268385   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:38.268385   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:38.271420   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:38.521873   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:38.599872   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:38.599872   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.599872   10364 retry.go:31] will retry after 1.272749266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.621006   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:38.701213   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:38.701287   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.701287   10364 retry.go:31] will retry after 729.524766ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
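Note: the retry.go lines in this stretch show the addon apply being retried with growing, jittered delays (202ms, 227ms, 395ms, 468ms, ... up to 1.27s) while the apiserver behind localhost:8441 is still coming up. A minimal sketch of that exponential-backoff-with-jitter pattern (not minikube's actual retry implementation):

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff runs fn up to attempts times, sleeping a jittered,
	// exponentially growing delay between failures.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		delay := base
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			time.Sleep(delay/2 + time.Duration(rand.Int63n(int64(delay))))
			delay *= 2
		}
		return err
	}

	func main() {
		err := retryWithBackoff(5, 200*time.Millisecond, func() error {
			return errors.New("connection refused") // stand-in for the kubectl apply above
		})
		fmt.Println(err)
	}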
	I1217 00:32:39.272125   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:39.272125   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:39.274907   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:39.436175   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:39.531183   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:39.531183   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.531183   10364 retry.go:31] will retry after 993.07118ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.877780   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:39.947906   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:39.950459   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.950459   10364 retry.go:31] will retry after 981.929326ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:40.275982   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:40.275982   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:40.278602   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:40.529721   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:40.604194   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:40.610090   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:40.610090   10364 retry.go:31] will retry after 3.313570586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:40.937823   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:41.010101   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:41.013448   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:41.013448   10364 retry.go:31] will retry after 3.983327016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:41.279217   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:41.279217   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:41.282049   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:42.282642   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:42.282642   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:42.285895   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:43.285957   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:43.285957   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:43.289436   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:43.928516   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:44.010824   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:44.016536   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:44.016536   10364 retry.go:31] will retry after 3.387443088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:44.290770   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:44.290770   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:44.293999   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:45.002652   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:45.076704   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:45.080905   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:45.080905   10364 retry.go:31] will retry after 2.289915246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:45.294211   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:45.294211   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:45.297045   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:46.297784   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:46.297784   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:46.300989   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:32:46.300989   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
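The node_ready.go:55 warning above closes one poll iteration: ten Retry-After-limited GETs against https://127.0.0.1:56622 surface as EOF, and the Ready check starts over. A minimal client-go sketch of that kind of readiness poll follows, assuming the kubeconfig path and node name shown in the log; this is an illustration of the pattern, not minikube's node_ready.go.

// Minimal node-Ready poll with client-go. Illustration only.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path as it appears in the log above.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	// Re-check the node's Ready condition every second until it is
	// true, tolerating transient errors (e.g. EOF while the apiserver
	// is down), as the warning lines above do.
	err = wait.PollUntilContextTimeout(context.Background(), time.Second, 2*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			node, err := cs.CoreV1().Nodes().Get(ctx, "functional-409700", metav1.GetOptions{})
			if err != nil {
				fmt.Println("will retry:", err)
				return false, nil
			}
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	fmt.Println("done:", err)
}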
	I1217 00:32:46.300989   10364 type.go:168] "Request Body" body=""
	I1217 00:32:46.300989   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:46.304308   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:47.305471   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:47.305471   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:47.308634   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:47.375936   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:47.409078   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:47.458764   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:47.458804   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:47.458804   10364 retry.go:31] will retry after 7.569688135s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:47.484927   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:47.488464   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:47.488464   10364 retry.go:31] will retry after 9.157991048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:48.309180   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:48.309180   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:48.312403   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:49.312469   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:49.312469   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:49.315488   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:50.316234   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:50.316234   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:50.319889   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:51.320680   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:51.320680   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:51.324928   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:32:52.325755   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:52.325755   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:52.328987   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:53.329277   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:53.329277   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:53.332508   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:54.333122   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:54.333449   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:54.337390   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:55.034235   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:55.110067   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:55.114541   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:55.114568   10364 retry.go:31] will retry after 11.854567632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:55.338017   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:55.338017   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:55.341093   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:56.341403   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:56.341403   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:56.344366   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:32:56.344366   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:32:56.344366   10364 type.go:168] "Request Body" body=""
	I1217 00:32:56.344898   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:56.347007   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:56.652443   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:56.739536   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:56.739536   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:56.739536   10364 retry.go:31] will retry after 10.780280137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:57.347379   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:57.347379   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:57.350807   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:58.351069   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:58.351069   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:58.354096   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:59.354451   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:59.354451   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:59.357775   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:00.357853   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:00.357853   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:00.362050   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:33:01.362288   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:01.362722   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:01.365594   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:02.365849   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:02.366254   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:02.369208   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:03.369619   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:03.369619   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:03.373087   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:04.373596   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:04.373596   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:04.376267   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:05.376901   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:05.376901   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:05.380341   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:06.380779   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:06.380779   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:06.384486   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:06.384486   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:06.384486   10364 type.go:168] "Request Body" body=""
	I1217 00:33:06.384486   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:06.386883   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:06.975138   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:33:07.047365   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:07.053212   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:07.053212   10364 retry.go:31] will retry after 9.4400792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:07.388016   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:07.388016   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:07.391682   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:07.525003   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:33:07.600422   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:07.604097   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:07.604097   10364 retry.go:31] will retry after 21.608180779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:08.392667   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:08.392667   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:08.395310   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:09.395626   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:09.395626   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:09.400417   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:33:10.400757   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:10.400757   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:10.403934   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:11.404855   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:11.404855   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:11.407439   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:12.407525   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:12.407525   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:12.410864   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:13.411229   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:13.411229   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:13.414667   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:14.414815   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:14.414815   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:14.417914   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:15.418400   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:15.418400   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:15.421658   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:16.421803   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:16.421803   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:16.424468   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:33:16.424468   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:16.425000   10364 type.go:168] "Request Body" body=""
	I1217 00:33:16.425000   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:16.427532   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:16.499443   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:33:16.577484   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:16.582973   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:16.583014   10364 retry.go:31] will retry after 31.220452725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:17.427856   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:17.427856   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:17.430661   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:18.431189   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:18.431189   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:18.434303   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:19.434667   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:19.434667   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:19.437774   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:20.438018   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:20.438018   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:20.441284   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:21.442005   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:21.442005   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:21.445477   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:22.446517   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:22.446517   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:22.451991   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:33:23.452224   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:23.452224   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:23.455297   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:24.455662   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:24.455662   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:24.458123   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:25.458634   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:25.458634   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:25.461576   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:26.462089   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:26.462563   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:26.465489   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:33:26.465489   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:26.465647   10364 type.go:168] "Request Body" body=""
	I1217 00:33:26.465647   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:26.468381   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:27.469289   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:27.469617   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:27.472277   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:28.472725   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:28.473201   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:28.476219   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:29.218035   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:33:29.290496   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:29.295368   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:29.295368   10364 retry.go:31] will retry after 28.200848873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:29.476644   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:29.476644   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:29.479582   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:30.480382   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:30.480382   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:30.483362   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:31.484451   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:31.484451   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:31.488344   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:32.488579   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:32.488579   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:32.491919   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:33.492204   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:33.492204   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:33.494785   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:34.495401   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:34.495401   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:34.499412   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:35.499565   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:35.500315   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:35.503299   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:36.504300   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:36.504300   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:36.507870   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:36.507973   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:36.508033   10364 type.go:168] "Request Body" body=""
	I1217 00:33:36.508113   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:36.510973   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:37.511257   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:37.511257   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:37.514688   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:38.514936   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:38.514936   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:38.518386   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:39.518923   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:39.518923   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:39.520922   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:33:40.521680   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:40.521680   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:40.524367   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:41.525837   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:41.526267   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:41.528903   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:42.529201   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:42.529201   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:42.531842   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:43.532127   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:43.532127   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:43.534820   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:44.536381   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:44.536381   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:44.539631   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:45.540548   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:45.540548   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:45.543978   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:46.544552   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:46.544552   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:46.547995   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:46.547995   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:46.547995   10364 type.go:168] "Request Body" body=""
	I1217 00:33:46.547995   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:46.550843   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:47.551203   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:47.551203   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:47.554480   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:47.809190   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:33:47.891444   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:47.895455   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:47.895455   10364 retry.go:31] will retry after 48.235338214s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:48.554744   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:48.554744   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:48.557563   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:49.558144   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:49.558144   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:49.560984   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:50.561573   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:50.561999   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:50.564681   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:51.564893   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:51.565218   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:51.567822   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:52.568697   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:52.568697   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:52.572043   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:53.572367   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:53.572367   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:53.575543   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:54.576655   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:54.576655   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:54.579628   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:55.580688   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:55.580688   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:55.583829   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:56.585061   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:56.585061   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:56.589344   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:56.589344   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:56.589879   10364 type.go:168] "Request Body" body=""
	I1217 00:33:56.589987   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:56.592329   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:57.501146   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:33:57.569298   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:57.571601   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:57.571601   10364 retry.go:31] will retry after 30.590824936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:57.593179   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:57.593179   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:57.595184   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:58.596116   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:58.596302   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:58.598982   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:59.599603   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:59.599603   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:59.602661   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:00.602875   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:00.603290   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:00.606460   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:01.607309   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:01.607677   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:01.609972   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:02.611301   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:02.611301   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:02.614599   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:03.614800   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:03.614800   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:03.618177   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:04.618602   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:04.618996   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:04.624198   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:05.625646   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:05.625646   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:05.629762   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:06.630421   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:06.630421   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:06.633232   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:06.633232   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:06.633809   10364 type.go:168] "Request Body" body=""
	I1217 00:34:06.633809   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:06.638868   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:07.639683   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:07.639683   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:07.643176   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:08.643409   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:08.643409   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:08.646509   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:09.647445   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:09.647445   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:09.650342   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:10.650843   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:10.651408   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:10.653984   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:11.654782   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:11.654782   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:11.660510   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:12.661264   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:12.661264   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:12.664725   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:13.665643   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:13.665643   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:13.668534   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:14.669351   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:14.669351   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:14.673188   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:15.673306   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:15.673709   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:15.675803   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:16.676778   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:16.676778   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:16.679773   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:16.679872   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:16.679999   10364 type.go:168] "Request Body" body=""
	I1217 00:34:16.680102   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:16.682768   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:17.683817   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:17.683817   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:17.686822   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:18.687027   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:18.687027   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:18.690241   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:19.690694   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:19.690694   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:19.693877   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:20.694298   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:20.694605   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:20.697314   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:21.697742   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:21.697742   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:21.700603   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:22.701210   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:22.701210   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:22.704640   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:23.705172   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:23.705172   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:23.707560   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:24.708954   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:24.708954   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:24.712011   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:25.712539   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:25.712539   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:25.717818   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:26.717996   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:26.717996   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:26.721620   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:34:26.721620   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:26.721620   10364 type.go:168] "Request Body" body=""
	I1217 00:34:26.721620   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:26.725519   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:27.726686   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:27.726686   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:27.729112   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:28.168229   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:34:28.439129   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:28.439129   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:28.439671   10364 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:34:28.730022   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:28.730022   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:28.732579   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:29.733316   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:29.733316   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:29.737180   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:30.737898   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:30.738218   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:30.740633   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:31.741637   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:31.741637   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:31.744968   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:32.745244   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:32.745244   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:32.748688   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:33.749681   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:33.749681   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:33.753864   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:34.754458   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:34.754458   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:34.757550   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:35.757989   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:35.757989   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:35.762318   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:36.136043   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:34:36.218441   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:36.224593   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:36.224593   10364 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:34:36.231181   10364 out.go:179] * Enabled addons: 
	I1217 00:34:36.235148   10364 addons.go:530] duration metric: took 2m0.3003648s for enable addons: enabled=[]
	I1217 00:34:36.762736   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:36.762736   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:36.765107   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:36.765107   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:36.765107   10364 type.go:168] "Request Body" body=""
	I1217 00:34:36.765638   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:36.768239   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:37.768638   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:37.768638   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:37.772263   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:38.772833   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:38.772833   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:38.775690   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:39.776860   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:39.776860   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:39.779543   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:40.779907   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:40.779907   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:40.782631   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:41.783358   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:41.783809   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:41.787117   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:42.787421   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:42.787421   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:42.790478   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:43.791393   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:43.791393   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:43.794768   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:44.795719   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:44.795719   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:44.799050   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:45.799750   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:45.800118   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:45.802333   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:46.802808   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:46.802808   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:46.806272   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:34:46.806272   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:46.806272   10364 type.go:168] "Request Body" body=""
	I1217 00:34:46.806272   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:46.808808   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:47.809106   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:47.809106   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:47.812072   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:48.812377   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:48.812377   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:48.815804   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:49.816160   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:49.816160   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:49.819073   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:50.819687   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:50.819687   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:50.824808   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:51.825256   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:51.825256   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:51.827149   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:34:52.828172   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:52.828172   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:52.831194   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:53.831502   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:53.831502   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:53.835949   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:54.836430   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:54.836430   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:54.840704   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:55.840945   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:55.840945   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:55.844273   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:56.844698   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:56.844774   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:56.847718   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:56.847718   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
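
Each cycle above is the client retrying the same GET ten times at one-second intervals before minikube's readiness check logs a warning and starts over. A self-contained sketch of that cadence using only the Go standard library (the URL and node name are copied from the log; the relaxed TLS config is for illustration only, since this sketch does not verify the test cluster's self-signed certificate):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	const url = "https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip certificate verification purely for this sketch; a real
		// client would verify against the cluster CA instead.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for attempt := 1; attempt <= 10; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			// This is the failure mode in the log: the connection is
			// accepted but closed before a response, so the client sees EOF.
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(1 * time.Second)
			continue
		}
		fmt.Printf("attempt %d: HTTP %d\n", attempt, resp.StatusCode)
		resp.Body.Close()
		return
	}
	fmt.Println("node still not reachable after 10 attempts (will retry)")
}
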
	[... the same GET https://127.0.0.1:56622/api/v1/nodes/functional-409700 retry cycle repeats once per second (attempts 1-10, empty responses, then a node_ready EOF warning); identical warnings were logged at 00:35:06, 00:35:16, 00:35:26, 00:35:37, 00:35:47, 00:35:57, 00:36:07, and 00:36:17 ...]
	W1217 00:36:27.221530   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	[... the same one-second retry cycle repeats unchanged: about every ten seconds the client issues the GET, receives ten consecutive empty Retry-After responses (with_retry.go attempts 1-10, ~2-5ms each), and logs the identical node_ready.go EOF warning; the warning recurs at 00:36:37, 00:36:47, 00:36:57, 00:37:07, 00:37:17, 00:37:27, 00:37:37, 00:37:47 and 00:37:57, and the log continues in this pattern through 00:37:59 ...]
	I1217 00:38:00.606096   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:00.606096   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:00.608989   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:01.609865   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:01.609965   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:01.613038   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:02.613818   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:02.614067   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:02.617196   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:03.617950   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:03.618366   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:03.621156   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:04.621587   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:04.621587   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:04.624616   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:05.625123   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:05.625123   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:05.627780   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:06.628169   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:06.628602   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:06.632684   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:38:07.633450   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:07.633450   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:07.636697   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:38:07.636697   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:38:07.636697   10364 type.go:168] "Request Body" body=""
	I1217 00:38:07.636697   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:07.638671   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:38:08.639000   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:08.639000   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:08.642420   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:09.642718   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:09.642718   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:09.645881   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:10.646391   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:10.646391   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:10.649653   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:11.650077   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:11.650077   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:11.653855   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:12.654508   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:12.654508   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:12.657918   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:13.658238   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:13.658238   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:13.661446   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:14.661684   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:14.661684   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:14.664655   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:15.665257   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:15.665578   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:15.672111   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=6
	I1217 00:38:16.672363   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:16.672363   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:16.675593   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:17.676054   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:17.676054   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:17.679454   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:38:17.679454   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:38:17.679454   10364 type.go:168] "Request Body" body=""
	I1217 00:38:17.679454   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:17.681452   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:38:18.682087   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:18.682087   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:18.685399   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:19.686028   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:19.686535   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:19.689161   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:20.689948   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:20.690239   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:20.692554   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:21.693716   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:21.694009   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:21.696661   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:22.697780   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:22.697780   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:22.700917   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:23.702225   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:23.702225   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:23.705612   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:24.706750   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:24.706750   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:24.710496   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:25.710729   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:25.711065   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:25.713912   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:26.714178   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:26.714178   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:26.718058   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:27.718245   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:27.718578   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:27.721305   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:38:27.721375   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:38:27.721441   10364 type.go:168] "Request Body" body=""
	I1217 00:38:27.721441   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:27.723332   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:38:28.723805   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:28.724207   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:28.727033   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:29.727723   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:29.727723   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:29.730941   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:30.731355   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:30.731355   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:30.734083   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:31.734645   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:31.734645   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:31.737932   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:32.738159   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:32.738159   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:32.741332   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:33.741889   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:33.741889   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:33.744576   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:34.745133   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:34.745546   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:34.747888   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:35.749177   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:35.749177   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:35.751796   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:38:36.264530   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1217 00:38:36.264530   10364 node_ready.go:38] duration metric: took 6m0.0004133s for node "functional-409700" to be "Ready" ...
	I1217 00:38:36.268017   10364 out.go:203] 
	W1217 00:38:36.270772   10364 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 00:38:36.270772   10364 out.go:285] * 
	W1217 00:38:36.272556   10364 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:38:36.275101   10364 out.go:203] 
	
	
	==> Docker <==
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.065379308Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.065401310Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.065424712Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.065461915Z" level=info msg="Initializing buildkit"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.183346289Z" level=info msg="Completed buildkit initialization"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.191707575Z" level=info msg="Daemon has completed initialization"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.191889990Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.191902191Z" level=info msg="API listen on [::]:2376"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.191916192Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 00:32:32 functional-409700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 00:32:32 functional-409700 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:32:32 functional-409700 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 17 00:32:32 functional-409700 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 17 00:32:32 functional-409700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Loaded network plugin cni"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 00:32:33 functional-409700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:38:39.023881   17420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:38:39.024707   17420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:38:39.027607   17420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:38:39.030270   17420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:38:39.031166   17420 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000806] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000803] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000826] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000811] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000815] FS:  0000000000000000 GS:  0000000000000000
	[Dec17 00:32] CPU: 7 PID: 54557 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000816] RIP: 0033:0x7f3abb92bb20
	[  +0.000446] Code: Unable to access opcode bytes at RIP 0x7f3abb92baf6.
	[  +0.000672] RSP: 002b:00007ffe2fcb88c0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000804] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000788] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000852] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001011] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001269] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001111] FS:  0000000000000000 GS:  0000000000000000
	[  +0.944697] CPU: 4 PID: 54682 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000867] RIP: 0033:0x7fa9cdbc0b20
	[  +0.000408] Code: Unable to access opcode bytes at RIP 0x7fa9cdbc0af6.
	[  +0.000668] RSP: 002b:00007ffde5330df0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.001045] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.001333] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001212] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001083] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000810] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000879] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 00:38:39 up 57 min,  0 user,  load average: 0.19, 0.32, 0.58
	Linux functional-409700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:38:36 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:38:36 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 819.
	Dec 17 00:38:36 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:38:36 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:38:36 functional-409700 kubelet[17256]: E1217 00:38:36.787770   17256 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:38:36 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:38:36 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:38:37 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 820.
	Dec 17 00:38:37 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:38:37 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:38:37 functional-409700 kubelet[17268]: E1217 00:38:37.512503   17268 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:38:37 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:38:37 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:38:38 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 821.
	Dec 17 00:38:38 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:38:38 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:38:38 functional-409700 kubelet[17296]: E1217 00:38:38.271469   17296 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:38:38 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:38:38 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:38:38 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 822.
	Dec 17 00:38:38 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:38:38 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:38:39 functional-409700 kubelet[17409]: E1217 00:38:39.019709   17409 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:38:39 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:38:39 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
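The kubelet section of the dump above is the actual root cause for this run: systemd restarts kubelet over and over (restart counter 819 through 822 within a few seconds), and every start fails configuration validation because the kicbase container sees a cgroup v1 hierarchy (consistent with the WSL2 5.15 kernel in the dmesg output and the Docker daemon's own cgroup v1 deprecation warning), which this v1.35.0-beta.0 kubelet refuses outright. A quick way to confirm which hierarchy a host exposes, sketched in Go against the filesystem magic at /sys/fs/cgroup (roughly the check kubelet's cgroup detection performs; Linux-only, not part of minikube or kubelet):

	// cgroup_check.go - small diagnostic sketch.
	package main

	import (
		"fmt"

		"golang.org/x/sys/unix"
	)

	func main() {
		var fs unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &fs); err != nil {
			panic(err)
		}
		if fs.Type == unix.CGROUP2_SUPER_MAGIC {
			fmt.Println("cgroup v2 (unified hierarchy)")
		} else {
			// tmpfs here means the legacy v1 layout, the case the
			// kubelet log above rejects.
			fmt.Println("cgroup v1 (legacy hierarchy)")
		}
	}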
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700: exit status 2 (597.4927ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-409700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/SoftStart (374.74s)
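Most of the 374.74s above is minikube's node-Ready wait: for six minutes client-go re-issued GET /api/v1/nodes/functional-409700 about once per second (each Retry-After response re-queues the request up to ten times before node_ready logs a warning and starts over), every read ended in EOF, and WaitNodeCondition gave up at the 6m0s deadline. A minimal sketch of an equivalent poll using stock client-go calls rather than minikube's internal helpers; the node name and timeout are taken from this run:

	// node_ready_poll.go - diagnostic sketch, assumes a kubeconfig at the
	// default path pointing at the cluster under test.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)

		// Same 6m budget the failing wait used ("wait 6m0s for node").
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		for {
			node, err := client.CoreV1().Nodes().Get(ctx, "functional-409700", metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				// Matches the "context deadline exceeded" warning in the log.
				fmt.Println("gave up:", ctx.Err())
				return
			case <-time.After(time.Second):
			}
		}
	}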

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (53.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-409700 get po -A
functional_test.go:711: (dbg) Non-zero exit: kubectl --context functional-409700 get po -A: exit status 1 (50.3755318s)

** stderr ** 
	E1217 00:38:50.851425    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:39:00.895648    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:39:10.940826    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:39:20.979696    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:39:31.022292    8280 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:713: failed to get kubectl pods: args "kubectl --context functional-409700 get po -A" : exit status 1
functional_test.go:717: expected stderr to be empty but got *"E1217 00:38:50.851425    8280 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:56622/api?timeout=32s\\\": EOF\"\nE1217 00:39:00.895648    8280 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:56622/api?timeout=32s\\\": EOF\"\nE1217 00:39:10.940826    8280 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:56622/api?timeout=32s\\\": EOF\"\nE1217 00:39:20.979696    8280 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:56622/api?timeout=32s\\\": EOF\"\nE1217 00:39:31.022292    8280 memcache.go:265] \"Unhandled Error\" err=\"couldn't get current server API group list: Get \\\"https://127.0.0.1:56622/api?timeout=32s\\\": EOF\"\nUnable to connect to the server: EOF\n"*: args "kubectl --context functional-409700 get po -A"
functional_test.go:720: expected stdout to include *kube-system* but got *""*. args: "kubectl --context functional-409700 get po -A"
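The repeated "Unable to connect to the server: EOF" in the stderr above is a different failure shape from "connection refused": Docker's port proxy on 127.0.0.1:56622 still accepts the TCP connection (the container is up), but the apiserver behind it is not running, so the connection dies on the first read. A short probe that makes the distinction visible; the /readyz health endpoint is a standard apiserver path, and the port is taken from this run:

	// apiserver_probe.go - diagnostic sketch. 127.0.0.1:56622 is the host
	// mapping docker inspect reports for the apiserver port 8441/tcp.
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				// The cluster CA is not trusted by the host; skip
				// verification for this one-off diagnostic only.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://127.0.0.1:56622/readyz")
		if err != nil {
			fmt.Println("probe failed:", err) // EOF here matches the kubectl errors
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status, string(body))
	}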
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-409700
helpers_test.go:244: (dbg) docker inspect functional-409700:

-- stdout --
	[
	    {
	        "Id": "ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de",
	        "Created": "2025-12-17T00:24:05.223199249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:24:05.522288836Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hosts",
	        "LogPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de-json.log",
	        "Name": "/functional-409700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-409700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-409700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-409700",
	                "Source": "/var/lib/docker/volumes/functional-409700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-409700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-409700",
	                "name.minikube.sigs.k8s.io": "functional-409700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e875b43ca920e8e90c82b8f1c4d2b0999a57d980ebe17c6406f45a4ccb58168",
	            "SandboxKey": "/var/run/docker/netns/6e875b43ca92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56623"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56619"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56620"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56621"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56622"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-409700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ee1b2722ed4e503e063723d4c0c00abc99d4e57387b6e181156511528a5a0896",
	                    "EndpointID": "42fbe7a4b084643a92cc2b6c93734665bcde06afb5eef9fe47b1c8f2757b2d71",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-409700",
	                        "ee5097ea8c4b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
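The inspect output above closes the loop: the container is still Running, and 8441/tcp, the apiserver port, is published on 127.0.0.1:56622, exactly the endpoint every failing GET targeted; only the processes inside are down. For reference, the same mapping can be pulled straight out of docker inspect with a Go template; a small wrapper, sketched here with the profile name from this run:

	// port_lookup.go - sketch: recover the host port Docker published for the
	// apiserver (8441/tcp) of a minikube node container.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		format := `{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "inspect", "-f", format, "functional-409700").Output()
		if err != nil {
			panic(err)
		}
		// Prints 56622 for the run captured above.
		fmt.Println("apiserver published on 127.0.0.1:" + strings.TrimSpace(string(out)))
	}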
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700: exit status 2 (603.0094ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 logs -n 25: (1.169833s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                                ARGS                                                                                 │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-045600 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                            │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │ 17 Dec 25 00:18 UTC │
	│ service        │ functional-045600 service hello-node --url --format={{.IP}}                                                                                                         │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │                     │
	│ ssh            │ functional-045600 ssh sudo cat /etc/test/nested/copy/4168/hosts                                                                                                     │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │ 17 Dec 25 00:18 UTC │
	│ docker-env     │ functional-045600 docker-env                                                                                                                                        │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │ 17 Dec 25 00:18 UTC │
	│ dashboard      │ --url --port 36195 -p functional-045600 --alsologtostderr -v=1                                                                                                      │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │                     │
	│ service        │ functional-045600 service hello-node --url                                                                                                                          │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:18 UTC │                     │
	│ cp             │ functional-045600 cp testdata\cp-test.txt /home/docker/cp-test.txt                                                                                                  │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ ssh            │ functional-045600 ssh -n functional-045600 sudo cat /home/docker/cp-test.txt                                                                                        │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ cp             │ functional-045600 cp functional-045600:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd2737548863\001\cp-test.txt │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ ssh            │ functional-045600 ssh -n functional-045600 sudo cat /home/docker/cp-test.txt                                                                                        │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ cp             │ functional-045600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                           │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ ssh            │ functional-045600 ssh -n functional-045600 sudo cat /tmp/does/not/exist/cp-test.txt                                                                                 │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls --format short --alsologtostderr                                                                                                         │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls --format json --alsologtostderr                                                                                                          │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls --format table --alsologtostderr                                                                                                         │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls --format yaml --alsologtostderr                                                                                                          │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ ssh            │ functional-045600 ssh pgrep buildkitd                                                                                                                               │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │                     │
	│ image          │ functional-045600 image build -t localhost/my-image:functional-045600 testdata\build --alsologtostderr                                                              │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls                                                                                                                                          │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                                                             │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                                                             │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                                                             │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ delete         │ -p functional-045600                                                                                                                                                │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:23 UTC │ 17 Dec 25 00:23 UTC │
	│ start          │ -p functional-409700 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:23 UTC │                     │
	│ start          │ -p functional-409700 --alsologtostderr -v=8                                                                                                                         │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:32:25
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
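Every entry below carries the glog-style prefix documented on the line above. For ad-hoc triage of these dumps, a small reading aid (not part of minikube) that splits the prefix into its fields:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Matches: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    var prefix = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) (\S+:\d+)\] (.*)$`)

    func main() {
        line := `I1217 00:32:25.940016   10364 out.go:368] Setting JSON to false`
        if m := prefix.FindStringSubmatch(line); m != nil {
            fmt.Printf("severity=%s date=%s time=%s tid=%s source=%s msg=%q\n",
                m[1], m[2], m[3], m[4], m[5], m[6])
        }
    }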
	I1217 00:32:25.884023   10364 out.go:360] Setting OutFile to fd 1372 ...
	I1217 00:32:25.926022   10364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:25.926022   10364 out.go:374] Setting ErrFile to fd 1800...
	I1217 00:32:25.926022   10364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:25.940016   10364 out.go:368] Setting JSON to false
	I1217 00:32:25.942016   10364 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3134,"bootTime":1765928411,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:32:25.942016   10364 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:32:25.946016   10364 out.go:179] * [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:32:25.948015   10364 notify.go:221] Checking for updates...
	I1217 00:32:25.950019   10364 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:25.952018   10364 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:32:25.955015   10364 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:32:25.957015   10364 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:32:25.960017   10364 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:32:25.964016   10364 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:32:25.964016   10364 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:32:26.171156   10364 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:32:26.176438   10364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:32:26.427526   10364 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 00:32:26.406486235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:32:26.434528   10364 out.go:179] * Using the docker driver based on existing profile
	I1217 00:32:26.436524   10364 start.go:309] selected driver: docker
	I1217 00:32:26.436524   10364 start.go:927] validating driver "docker" against &{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:26.436524   10364 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:32:26.442525   10364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:32:26.668518   10364 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 00:32:26.649642613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:32:26.752324   10364 cni.go:84] Creating CNI manager for ""
	I1217 00:32:26.752324   10364 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:32:26.752324   10364 start.go:353] cluster config:
	{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:26.755825   10364 out.go:179] * Starting "functional-409700" primary control-plane node in "functional-409700" cluster
	I1217 00:32:26.757701   10364 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 00:32:26.760609   10364 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:32:26.762036   10364 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:32:26.763103   10364 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 00:32:26.763103   10364 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:32:26.763103   10364 cache.go:65] Caching tarball of preloaded images
	I1217 00:32:26.763399   10364 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 00:32:26.763399   10364 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 00:32:26.763399   10364 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\config.json ...
	I1217 00:32:26.840670   10364 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:32:26.840729   10364 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:32:26.840729   10364 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:32:26.840729   10364 start.go:360] acquireMachinesLock for functional-409700: {Name:mk3729943c20c012b6c7db136193ce43a4a81cc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:32:26.840729   10364 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-409700"
	I1217 00:32:26.840729   10364 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:32:26.840729   10364 fix.go:54] fixHost starting: 
	I1217 00:32:26.848208   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:26.901821   10364 fix.go:112] recreateIfNeeded on functional-409700: state=Running err=<nil>
	W1217 00:32:26.901821   10364 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:32:26.907276   10364 out.go:252] * Updating the running docker "functional-409700" container ...
	I1217 00:32:26.907373   10364 machine.go:94] provisionDockerMachine start ...
	I1217 00:32:26.910817   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:26.967003   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:26.967068   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:26.967068   10364 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:32:27.152656   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
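The repeated "docker container inspect -f ... HostPort" runs above are how the host-side port of the container's sshd is discovered (22/tcp is published on 127.0.0.1:56623 in this run). The same lookup as a standalone sketch; the container name is taken from the log and is otherwise illustrative:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // The same Go template minikube passes to docker: take the first host
        // binding published for the container's 22/tcp port.
        format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
        out, err := exec.Command("docker", "container", "inspect", "-f", format, "functional-409700").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh port:", strings.TrimSpace(string(out))) // 56623 here
    }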
	
	I1217 00:32:27.152656   10364 ubuntu.go:182] provisioning hostname "functional-409700"
	I1217 00:32:27.156074   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:27.214234   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:27.214712   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:27.214757   10364 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-409700 && echo "functional-409700" | sudo tee /etc/hostname
	I1217 00:32:27.407594   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:32:27.413090   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:27.490102   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:27.490703   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:27.490749   10364 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-409700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-409700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-409700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:32:27.672866   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:32:27.672866   10364 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 00:32:27.672866   10364 ubuntu.go:190] setting up certificates
	I1217 00:32:27.672866   10364 provision.go:84] configureAuth start
	I1217 00:32:27.676807   10364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:32:27.732901   10364 provision.go:143] copyHostCerts
	I1217 00:32:27.733091   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1217 00:32:27.733091   10364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 00:32:27.733091   10364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 00:32:27.733091   10364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 00:32:27.734330   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1217 00:32:27.734382   10364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 00:32:27.734382   10364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 00:32:27.734382   10364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 00:32:27.735088   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1217 00:32:27.735088   10364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 00:32:27.735088   10364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 00:32:27.735728   10364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 00:32:27.736339   10364 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-409700 san=[127.0.0.1 192.168.49.2 functional-409700 localhost minikube]
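provision.go signs a server certificate against the profile CA with the SAN list shown above (two IPs, three DNS names). As a rough illustration of what such a SAN-bearing certificate looks like in Go's x509 API, here is a self-signed variant; self-signing is a simplification, since minikube signs with ca.pem/ca-key.pem:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.functional-409700"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration in the config above
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs copied from the log line above.
            DNSNames:    []string{"functional-409700", "localhost", "minikube"},
            IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }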
	I1217 00:32:27.847670   10364 provision.go:177] copyRemoteCerts
	I1217 00:32:27.851712   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:32:27.854410   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:27.907971   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:28.027015   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1217 00:32:28.027015   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:32:28.064351   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1217 00:32:28.064351   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:32:28.092479   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1217 00:32:28.092479   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:32:28.124650   10364 provision.go:87] duration metric: took 451.7801ms to configureAuth
	I1217 00:32:28.124650   10364 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:32:28.125238   10364 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:32:28.128674   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.184894   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:28.185614   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:28.185614   10364 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 00:32:28.351273   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 00:32:28.351273   10364 ubuntu.go:71] root file system type: overlay
	I1217 00:32:28.351273   10364 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 00:32:28.355630   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.410840   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:28.411043   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:28.411043   10364 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 00:32:28.608128   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
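The empty "ExecStart=" followed by a full one is the standard systemd idiom for overriding an inherited command, as the comments in the unit explain. A quick way to confirm that a unit ends up with exactly one effective ExecStart (a sketch; the path matches the unit written above):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/lib/systemd/system/docker.service")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        n := 0
        s := bufio.NewScanner(f)
        for s.Scan() {
            line := strings.TrimSpace(s.Text())
            // A bare "ExecStart=" only resets the inherited value; it does not
            // count toward the one-command limit for Type=notify services.
            if strings.HasPrefix(line, "ExecStart=") && line != "ExecStart=" {
                n++
            }
        }
        fmt.Println("effective ExecStart lines:", n) // expect 1
    }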
	
	I1217 00:32:28.612284   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.672356   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:28.672356   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:28.672356   10364 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 00:32:28.839586   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:32:28.839640   10364 machine.go:97] duration metric: took 1.9322227s to provisionDockerMachine
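The "diff ... || { mv ...; systemctl restart docker; }" command above only swaps the unit in and restarts docker when the rendered content actually differs, which is why a start against an already-provisioned machine stays cheap. The same write-if-changed pattern in Go (path and payload here are placeholders):

    package main

    import (
        "bytes"
        "fmt"
        "os"
    )

    // writeIfChanged replaces path only when its content differs, and reports
    // whether a follow-up action (such as a service restart) is needed.
    func writeIfChanged(path string, data []byte) (bool, error) {
        old, err := os.ReadFile(path)
        if err == nil && bytes.Equal(old, data) {
            return false, nil
        }
        if err := os.WriteFile(path, data, 0o644); err != nil {
            return false, err
        }
        return true, nil
    }

    func main() {
        changed, err := writeIfChanged("/tmp/docker.service", []byte("[Unit]\n"))
        if err != nil {
            panic(err)
        }
        fmt.Println("restart needed:", changed)
    }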
	I1217 00:32:28.839640   10364 start.go:293] postStartSetup for "functional-409700" (driver="docker")
	I1217 00:32:28.839640   10364 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:32:28.845012   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:32:28.847117   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.904187   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.040693   10364 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:32:29.050158   10364 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 00:32:29.050158   10364 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 00:32:29.050158   10364 command_runner.go:130] > VERSION_ID="12"
	I1217 00:32:29.050158   10364 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 00:32:29.050158   10364 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 00:32:29.050158   10364 command_runner.go:130] > ID=debian
	I1217 00:32:29.050158   10364 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 00:32:29.050158   10364 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 00:32:29.050158   10364 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 00:32:29.050158   10364 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:32:29.050158   10364 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:32:29.050158   10364 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 00:32:29.050158   10364 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 00:32:29.050833   10364 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 00:32:29.050833   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> /etc/ssl/certs/41682.pem
	I1217 00:32:29.051707   10364 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts -> hosts in /etc/test/nested/copy/4168
	I1217 00:32:29.051707   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts -> /etc/test/nested/copy/4168/hosts
	I1217 00:32:29.055303   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4168
	I1217 00:32:29.070738   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 00:32:29.103807   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts --> /etc/test/nested/copy/4168/hosts (40 bytes)
	I1217 00:32:29.133625   10364 start.go:296] duration metric: took 293.9818ms for postStartSetup
	I1217 00:32:29.137970   10364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:32:29.142249   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:29.194718   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.311046   10364 command_runner.go:130] > 1%
	I1217 00:32:29.316279   10364 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:32:29.324732   10364 command_runner.go:130] > 950G
	I1217 00:32:29.324732   10364 fix.go:56] duration metric: took 2.4839807s for fixHost
	I1217 00:32:29.324732   10364 start.go:83] releasing machines lock for "functional-409700", held for 2.4839807s
	I1217 00:32:29.330157   10364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:32:29.384617   10364 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 00:32:29.388675   10364 ssh_runner.go:195] Run: cat /version.json
	I1217 00:32:29.388675   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:29.392044   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:29.442282   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.464827   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.558946   10364 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1217 00:32:29.559478   10364 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 00:32:29.581467   10364 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 00:32:29.585625   10364 ssh_runner.go:195] Run: systemctl --version
	I1217 00:32:29.598125   10364 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 00:32:29.598125   10364 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 00:32:29.602648   10364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 00:32:29.614417   10364 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 00:32:29.615099   10364 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:32:29.621960   10364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:32:29.646439   10364 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:32:29.646439   10364 start.go:496] detecting cgroup driver to use...
	I1217 00:32:29.646439   10364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:32:29.646439   10364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:32:29.668226   10364 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1217 00:32:29.672516   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:32:29.695799   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:32:29.710451   10364 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:32:29.715117   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1217 00:32:29.723829   10364 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 00:32:29.723829   10364 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 00:32:29.737249   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:32:29.756347   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:32:29.779698   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:32:29.801679   10364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:32:29.825863   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:32:29.844752   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:32:29.865139   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 00:32:29.885382   10364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:32:29.900142   10364 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 00:32:29.904180   10364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:32:29.922078   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:30.133548   10364 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 00:32:30.412249   10364 start.go:496] detecting cgroup driver to use...
	I1217 00:32:30.412298   10364 detect.go:187] detected "cgroupfs" cgroup driver on host os
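Both detection passes above land on "cgroupfs", consistent with CgroupDriver:cgroupfs in the docker info dumps earlier in this log. An illustrative way to read the daemon's own view (minikube's detect.go uses its own host-side logic; this shortcut just asks the daemon):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
        if err != nil {
            panic(err)
        }
        fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // "cgroupfs" here
    }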
	I1217 00:32:30.416670   10364 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 00:32:30.435945   10364 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1217 00:32:30.435945   10364 command_runner.go:130] > [Unit]
	I1217 00:32:30.435945   10364 command_runner.go:130] > Description=Docker Application Container Engine
	I1217 00:32:30.435945   10364 command_runner.go:130] > Documentation=https://docs.docker.com
	I1217 00:32:30.435945   10364 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1217 00:32:30.435945   10364 command_runner.go:130] > Wants=network-online.target containerd.service
	I1217 00:32:30.435945   10364 command_runner.go:130] > Requires=docker.socket
	I1217 00:32:30.435945   10364 command_runner.go:130] > StartLimitBurst=3
	I1217 00:32:30.435945   10364 command_runner.go:130] > StartLimitIntervalSec=60
	I1217 00:32:30.435945   10364 command_runner.go:130] > [Service]
	I1217 00:32:30.435945   10364 command_runner.go:130] > Type=notify
	I1217 00:32:30.435945   10364 command_runner.go:130] > Restart=always
	I1217 00:32:30.435945   10364 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1217 00:32:30.435945   10364 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1217 00:32:30.435945   10364 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1217 00:32:30.435945   10364 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1217 00:32:30.435945   10364 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1217 00:32:30.435945   10364 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1217 00:32:30.435945   10364 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1217 00:32:30.435945   10364 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1217 00:32:30.435945   10364 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1217 00:32:30.435945   10364 command_runner.go:130] > ExecStart=
	I1217 00:32:30.435945   10364 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1217 00:32:30.435945   10364 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1217 00:32:30.435945   10364 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1217 00:32:30.435945   10364 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1217 00:32:30.435945   10364 command_runner.go:130] > LimitNOFILE=infinity
	I1217 00:32:30.435945   10364 command_runner.go:130] > LimitNPROC=infinity
	I1217 00:32:30.435945   10364 command_runner.go:130] > LimitCORE=infinity
	I1217 00:32:30.435945   10364 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1217 00:32:30.435945   10364 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1217 00:32:30.435945   10364 command_runner.go:130] > TasksMax=infinity
	I1217 00:32:30.437404   10364 command_runner.go:130] > TimeoutStartSec=0
	I1217 00:32:30.437404   10364 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1217 00:32:30.437404   10364 command_runner.go:130] > Delegate=yes
	I1217 00:32:30.437404   10364 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1217 00:32:30.437404   10364 command_runner.go:130] > KillMode=process
	I1217 00:32:30.437404   10364 command_runner.go:130] > OOMScoreAdjust=-500
	I1217 00:32:30.437404   10364 command_runner.go:130] > [Install]
	I1217 00:32:30.437404   10364 command_runner.go:130] > WantedBy=multi-user.target
	I1217 00:32:30.443833   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:32:30.468114   10364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:32:30.542786   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:32:30.567969   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:32:30.586631   10364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:32:30.606342   10364 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
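The two /etc/crictl.yaml writes in this start (first pointing crictl at containerd's socket, then, once docker is confirmed as the runtime, at cri-dockerd's) are what keep crictl talking to the active CRI endpoint. The second write, reduced to a Go one-liner with the path and content taken from the log:

    package main

    import "os"

    func main() {
        data := []byte("runtime-endpoint: unix:///var/run/cri-dockerd.sock\n")
        if err := os.WriteFile("/etc/crictl.yaml", data, 0o644); err != nil {
            panic(err)
        }
    }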
	I1217 00:32:30.611878   10364 ssh_runner.go:195] Run: which cri-dockerd
	I1217 00:32:30.618659   10364 command_runner.go:130] > /usr/bin/cri-dockerd
	I1217 00:32:30.623279   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 00:32:30.636760   10364 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 00:32:30.661689   10364 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 00:32:30.828747   10364 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 00:32:30.988536   10364 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 00:32:30.988536   10364 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 00:32:31.016800   10364 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 00:32:31.041396   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:31.178126   10364 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 00:32:32.195651   10364 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0175164s)
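Here minikube rewrites /etc/docker/daemon.json to force the cgroupfs cgroup driver, then bounces the daemon. The exact 130-byte payload is not shown in the log, so the "exec-opts" key below is an assumption about how the driver is typically set; a sketch (must run as root):

    // daemon_json.go: set the docker cgroup driver, then reset/reload/restart.
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        cfg := map[string]any{
            "exec-opts": []string{"native.cgroupdriver=cgroupfs"}, // assumed content
        }
        b, _ := json.MarshalIndent(cfg, "", "  ")
        if err := os.WriteFile("/etc/docker/daemon.json", b, 0644); err != nil {
            fmt.Println("write:", err) // fails unless running as root
            return
        }
        // Match the logged sequence: reset-failed, daemon-reload, restart.
        for _, args := range [][]string{
            {"systemctl", "reset-failed", "docker"},
            {"systemctl", "daemon-reload"},
            {"systemctl", "restart", "docker"},
        } {
            if err := exec.Command("sudo", args...).Run(); err != nil {
                fmt.Println(args, "->", err)
            }
        }
    }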
	I1217 00:32:32.199801   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:32:32.224938   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 00:32:32.247199   10364 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 00:32:32.275016   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:32:32.297360   10364 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 00:32:32.448301   10364 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 00:32:32.597398   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:32.739627   10364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 00:32:32.765463   10364 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 00:32:32.790341   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:32.929296   10364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 00:32:33.067092   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:32:33.087872   10364 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 00:32:33.092277   10364 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 00:32:33.102122   10364 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1217 00:32:33.102122   10364 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 00:32:33.102122   10364 command_runner.go:130] > Device: 0,112	Inode: 1758        Links: 1
	I1217 00:32:33.102122   10364 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1217 00:32:33.102122   10364 command_runner.go:130] > Access: 2025-12-17 00:32:32.939070006 +0000
	I1217 00:32:33.102122   10364 command_runner.go:130] > Modify: 2025-12-17 00:32:32.939070006 +0000
	I1217 00:32:33.102122   10364 command_runner.go:130] > Change: 2025-12-17 00:32:32.939070006 +0000
	I1217 00:32:33.103099   10364 command_runner.go:130] >  Birth: -
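start.go then waits up to 60s for /var/run/cri-dockerd.sock to appear. A self-contained sketch of that wait: poll stat() until the path exists and is a unix socket, or time out:

    // wait_socket.go: poll for a unix socket with a deadline.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil // the stat output above shows exactly this mode bit
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }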
	I1217 00:32:33.103099   10364 start.go:564] Will wait 60s for crictl version
	I1217 00:32:33.106627   10364 ssh_runner.go:195] Run: which crictl
	I1217 00:32:33.116038   10364 command_runner.go:130] > /usr/local/bin/crictl
	I1217 00:32:33.119921   10364 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:32:33.163697   10364 command_runner.go:130] > Version:  0.1.0
	I1217 00:32:33.163697   10364 command_runner.go:130] > RuntimeName:  docker
	I1217 00:32:33.163697   10364 command_runner.go:130] > RuntimeVersion:  29.1.3
	I1217 00:32:33.163697   10364 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 00:32:33.163697   10364 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 00:32:33.167790   10364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:32:33.207644   10364 command_runner.go:130] > 29.1.3
	I1217 00:32:33.212842   10364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:32:33.256029   10364 command_runner.go:130] > 29.1.3
	I1217 00:32:33.258896   10364 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 00:32:33.262892   10364 cli_runner.go:164] Run: docker exec -t functional-409700 dig +short host.docker.internal
	I1217 00:32:33.463377   10364 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 00:32:33.467155   10364 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 00:32:33.475542   10364 command_runner.go:130] > 192.168.65.254	host.minikube.internal
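The grep above confirms host.minikube.internal already resolves inside the node. A sketch of the check-then-append pattern behind it, i.e. an idempotent /etc/hosts edit, using the entry values from the log:

    // hosts_entry.go: append an /etc/hosts entry only if it is missing.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func ensureHostsEntry(path, ip, host string) error {
        b, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        for _, line := range strings.Split(string(b), "\n") {
            f := strings.Fields(line)
            if len(f) >= 2 && f[0] == ip && f[1] == host {
                return nil // already present, like the successful grep above
            }
        }
        f, err := os.OpenFile(path, os.O_APPEND|os.O_WRONLY, 0644)
        if err != nil {
            return err
        }
        defer f.Close()
        _, err = f.WriteString(ip + "\t" + host + "\n")
        return err
    }

    func main() {
        fmt.Println(ensureHostsEntry("/etc/hosts", "192.168.65.254", "host.minikube.internal"))
    }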
	I1217 00:32:33.478907   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:33.533350   10364 kubeadm.go:884] updating cluster {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:32:33.533350   10364 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:32:33.537278   10364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1217 00:32:33.575248   10364 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:32:33.575248   10364 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 00:32:33.575248   10364 docker.go:621] Images already preloaded, skipping extraction
	I1217 00:32:33.579121   10364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:32:33.614970   10364 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 00:32:33.615044   10364 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 00:32:33.615044   10364 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1217 00:32:33.615141   10364 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:32:33.615171   10364 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 00:32:33.615171   10364 cache_images.go:86] Images are preloaded, skipping loading
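The preload check is a plain set comparison: list what the docker daemon already has and verify every required image tag is present before deciding to skip extraction. A sketch, with a few of the required tags from the log hard-coded for illustration:

    // preload_check.go: decide whether the image preload can be skipped.
    package main

    import (
        "bufio"
        "bytes"
        "fmt"
        "os/exec"
    )

    func main() {
        required := []string{
            "registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
            "registry.k8s.io/etcd:3.6.5-0",
            "registry.k8s.io/pause:3.10.1",
        }
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            fmt.Println("docker images:", err)
            return
        }
        have := map[string]bool{}
        sc := bufio.NewScanner(bytes.NewReader(out))
        for sc.Scan() {
            have[sc.Text()] = true
        }
        for _, img := range required {
            if !have[img] {
                fmt.Println("missing, would extract preload:", img)
                return
            }
        }
        fmt.Println("images already preloaded, skipping extraction")
    }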
	I1217 00:32:33.615171   10364 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1217 00:32:33.615349   10364 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-409700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
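The kubelet unit above uses the standard systemd override idiom: the empty ExecStart= line clears the base unit's command before the drop-in sets its own. A sketch of rendering that drop-in with text/template (the field names are assumptions, not minikube's actual template):

    // kubelet_dropin.go: render the kubelet systemd drop-in shown above.
    package main

    import (
        "os"
        "text/template"
    )

    const dropin = `[Unit]
    Wants=docker.socket

    [Service]
    ExecStart=
    ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `

    func main() {
        t := template.Must(template.New("dropin").Parse(dropin))
        // Values taken from the logged unit.
        t.Execute(os.Stdout, map[string]string{
            "KubeletPath": "/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
            "NodeName":    "functional-409700",
            "NodeIP":      "192.168.49.2",
        })
    }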
	I1217 00:32:33.618510   10364 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 00:32:34.052354   10364 command_runner.go:130] > cgroupfs
	I1217 00:32:34.052472   10364 cni.go:84] Creating CNI manager for ""
	I1217 00:32:34.052529   10364 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:32:34.052529   10364 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:32:34.052529   10364 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-409700 NodeName:functional-409700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:32:34.052529   10364 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-409700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:32:34.056808   10364 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:32:34.073105   10364 command_runner.go:130] > kubeadm
	I1217 00:32:34.073177   10364 command_runner.go:130] > kubectl
	I1217 00:32:34.073177   10364 command_runner.go:130] > kubelet
	I1217 00:32:34.073240   10364 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:32:34.077459   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:32:34.090893   10364 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 00:32:34.114750   10364 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:32:34.135531   10364 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
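The 2225-byte kubeadm.yaml.new written here is the multi-document YAML dumped above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). A sketch that walks the documents with gopkg.in/yaml.v3 and prints each kind, as a sanity check on such a file:

    // kubeadm_yaml_check.go: enumerate the documents in a multi-doc config.
    package main

    import (
        "fmt"
        "io"
        "os"

        "gopkg.in/yaml.v3"
    )

    func main() {
        f, err := os.Open("/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            fmt.Println("open:", err)
            return
        }
        defer f.Close()
        dec := yaml.NewDecoder(f)
        for {
            var doc struct {
                APIVersion string `yaml:"apiVersion"`
                Kind       string `yaml:"kind"`
            }
            if err := dec.Decode(&doc); err == io.EOF {
                break
            } else if err != nil {
                fmt.Println("decode:", err)
                return
            }
            fmt.Println(doc.Kind, doc.APIVersion)
        }
    }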
	I1217 00:32:34.159985   10364 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:32:34.168280   10364 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 00:32:34.172492   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:34.310890   10364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:32:34.700023   10364 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700 for IP: 192.168.49.2
	I1217 00:32:34.700115   10364 certs.go:195] generating shared ca certs ...
	I1217 00:32:34.700115   10364 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:34.700485   10364 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 00:32:34.701055   10364 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 00:32:34.701055   10364 certs.go:257] generating profile certs ...
	I1217 00:32:34.701864   10364 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\client.key
	I1217 00:32:34.702120   10364 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key.dc66fb1b
	I1217 00:32:34.702437   10364 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key
	I1217 00:32:34.702487   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 00:32:34.702646   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 00:32:34.703540   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 00:32:34.703598   10364 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 00:32:34.703598   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 00:32:34.703598   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 00:32:34.704137   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 00:32:34.704439   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 00:32:34.704439   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 00:32:34.704439   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:34.704970   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem -> /usr/share/ca-certificates/4168.pem
	I1217 00:32:34.705196   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> /usr/share/ca-certificates/41682.pem
	I1217 00:32:34.706089   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:32:34.736497   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:32:34.769712   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:32:34.802984   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:32:34.830525   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:32:34.860563   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:32:34.889179   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:32:34.920536   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:32:34.947027   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:32:34.978500   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 00:32:35.008458   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 00:32:35.040774   10364 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:32:35.063574   10364 ssh_runner.go:195] Run: openssl version
	I1217 00:32:35.083169   10364 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 00:32:35.087374   10364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.105491   10364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:32:35.130590   10364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.139034   10364 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.139034   10364 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.144343   10364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.192130   10364 command_runner.go:130] > b5213941
	I1217 00:32:35.199882   10364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
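The b5213941 value above is OpenSSL's subject hash for the CA, and the <hash>.0 symlink in /etc/ssl/certs is what makes the cert discoverable to TLS libraries that scan that directory. A sketch reproducing the hash-and-link step, shelling out to openssl as the log does (the symlink requires root):

    // cert_hash_link.go: install a CA under its OpenSSL subject-hash name.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            fmt.Println("openssl:", err)
            return
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above
        link := "/etc/ssl/certs/" + hash + ".0"
        // ln -fs equivalent: drop any stale link, then create a fresh one.
        os.Remove(link)
        fmt.Println(link, "->", pem, os.Symlink(pem, link))
    }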
	I1217 00:32:35.220625   10364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.238544   10364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 00:32:35.259065   10364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.266549   10364 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.266638   10364 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.271223   10364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.315698   10364 command_runner.go:130] > 51391683
	I1217 00:32:35.322687   10364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:32:35.339650   10364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.358290   10364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 00:32:35.374891   10364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.383058   10364 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.383058   10364 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.387660   10364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.431595   10364 command_runner.go:130] > 3ec20f2e
	I1217 00:32:35.436891   10364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:32:35.453526   10364 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:32:35.462183   10364 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:32:35.462183   10364 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 00:32:35.462183   10364 command_runner.go:130] > Device: 8,48	Inode: 15294       Links: 1
	I1217 00:32:35.462183   10364 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:32:35.462183   10364 command_runner.go:130] > Access: 2025-12-17 00:28:21.018933524 +0000
	I1217 00:32:35.462183   10364 command_runner.go:130] > Modify: 2025-12-17 00:24:18.315890848 +0000
	I1217 00:32:35.462183   10364 command_runner.go:130] > Change: 2025-12-17 00:24:18.315890848 +0000
	I1217 00:32:35.462183   10364 command_runner.go:130] >  Birth: 2025-12-17 00:24:18.315890848 +0000
	I1217 00:32:35.466206   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:32:35.509324   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.514900   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:32:35.558615   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.563444   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:32:35.608112   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.612517   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:32:35.657914   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.662797   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:32:35.707243   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.713694   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:32:35.760477   10364 command_runner.go:130] > Certificate will not expire
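Each `-checkend 86400` call above asks whether a certificate outlives the next 24 hours. The same check in pure Go with crypto/x509, avoiding the per-cert openssl round-trips, as a sketch:

    // cert_expiry.go: report whether a PEM cert expires within a window.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func expiresWithin(path string, d time.Duration) (bool, error) {
        b, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(b)
        if block == nil {
            return false, fmt.Errorf("no PEM block in %s", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        // True when NotAfter falls inside the next d (openssl -checkend semantics).
        return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
        soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println("expires within 24h:", soon, "err:", err)
    }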
	I1217 00:32:35.761002   10364 kubeadm.go:401] StartCluster: {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:35.764353   10364 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 00:32:35.796231   10364 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:32:35.810900   10364 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 00:32:35.810946   10364 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 00:32:35.810946   10364 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 00:32:35.810996   10364 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:32:35.810996   10364 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:32:35.815318   10364 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:32:35.828811   10364 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:32:35.832840   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:35.889236   10364 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-409700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:35.889236   10364 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-409700" cluster setting kubeconfig missing "functional-409700" context setting]
	I1217 00:32:35.889236   10364 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
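"Needs updating (will repair)" means the missing cluster and context stanzas are re-added to the kubeconfig and the file is rewritten under the lock. A sketch using client-go's clientcmd; the server address comes from the log, while the certificate paths are illustrative stand-ins:

    // kubeconfig_repair.go: re-add a cluster/context entry and write back.
    package main

    import (
        "fmt"

        "k8s.io/client-go/tools/clientcmd"
        api "k8s.io/client-go/tools/clientcmd/api"
    )

    func main() {
        path := "kubeconfig" // stands in for the Windows path in the log
        cfg, err := clientcmd.LoadFromFile(path)
        if err != nil {
            fmt.Println("load:", err)
            return
        }
        name := "functional-409700"
        cfg.Clusters[name] = &api.Cluster{
            Server:               "https://127.0.0.1:56622",
            CertificateAuthority: "ca.crt", // illustrative
        }
        cfg.AuthInfos[name] = &api.AuthInfo{
            ClientCertificate: "client.crt", // illustrative
            ClientKey:         "client.key", // illustrative
        }
        cfg.Contexts[name] = &api.Context{Cluster: name, AuthInfo: name}
        cfg.CurrentContext = name
        if err := clientcmd.WriteToFile(*cfg, path); err != nil {
            fmt.Println("write:", err)
        }
    }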
	I1217 00:32:35.906814   10364 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:35.907042   10364 kapi.go:59] client config for functional-409700: &rest.Config{Host:"https://127.0.0.1:56622", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff734ad9080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:32:35.908414   10364 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 00:32:35.912354   10364 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:32:35.931570   10364 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1217 00:32:35.931672   10364 kubeadm.go:602] duration metric: took 120.6751ms to restartPrimaryControlPlane
	I1217 00:32:35.931672   10364 kubeadm.go:403] duration metric: took 170.6688ms to StartCluster
	I1217 00:32:35.931672   10364 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:35.931672   10364 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:35.932861   10364 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:35.933736   10364 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 00:32:35.933736   10364 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:32:35.933901   10364 addons.go:70] Setting storage-provisioner=true in profile "functional-409700"
	I1217 00:32:35.933901   10364 addons.go:239] Setting addon storage-provisioner=true in "functional-409700"
	I1217 00:32:35.933901   10364 addons.go:70] Setting default-storageclass=true in profile "functional-409700"
	I1217 00:32:35.934051   10364 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:32:35.934098   10364 host.go:66] Checking if "functional-409700" exists ...
	I1217 00:32:35.934098   10364 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-409700"
	I1217 00:32:35.936531   10364 out.go:179] * Verifying Kubernetes components...
	I1217 00:32:35.942620   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:35.942620   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:35.944620   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:36.000654   10364 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:32:36.002654   10364 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:36.002654   10364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:32:36.005647   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:36.010648   10364 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:36.011652   10364 kapi.go:59] client config for functional-409700: &rest.Config{Host:"https://127.0.0.1:56622", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff734ad9080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:32:36.012648   10364 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:32:36.012648   10364 addons.go:239] Setting addon default-storageclass=true in "functional-409700"
	I1217 00:32:36.012648   10364 host.go:66] Checking if "functional-409700" exists ...
	I1217 00:32:36.019655   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:36.056654   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:36.069645   10364 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:36.069645   10364 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:32:36.072658   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:36.098645   10364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:32:36.122646   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:36.187680   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:36.202921   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:36.260682   10364 node_ready.go:35] waiting up to 6m0s for node "functional-409700" to be "Ready" ...
	I1217 00:32:36.260849   10364 type.go:168] "Request Body" body=""
	I1217 00:32:36.261061   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:36.264195   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:36.265260   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:36.336693   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.340106   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.340627   10364 retry.go:31] will retry after 202.939607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
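The growing retry.go delays in these stanzas (202ms, 227ms, 395ms, and so on) are jittered exponential backoff while the apiserver on :8441 comes back up. A minimal sketch of that loop:

    // retry_backoff.go: re-run a step with growing, jittered delays.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    func retryWithBackoff(attempts int, initial time.Duration, f func() error) error {
        delay := initial
        var err error
        for i := 0; i < attempts; i++ {
            if err = f(); err == nil {
                return nil
            }
            // Jitter so concurrent retries don't synchronize, then double the base.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: %v\n", sleep, err)
            time.Sleep(sleep)
            delay *= 2
        }
        return err
    }

    func main() {
        n := 0
        err := retryWithBackoff(5, 200*time.Millisecond, func() error {
            n++
            if n < 3 {
                return fmt.Errorf("connection refused (attempt %d)", n)
            }
            return nil
        })
        fmt.Println("final:", err)
    }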
	I1217 00:32:36.388976   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.393288   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.393853   10364 retry.go:31] will retry after 227.289762ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.548879   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:36.622050   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.626260   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.626260   10364 retry.go:31] will retry after 395.113457ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.626489   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:36.698520   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.702459   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.702459   10364 retry.go:31] will retry after 468.39049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.026805   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:37.111151   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.116224   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.116762   10364 retry.go:31] will retry after 792.119284ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.177175   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:37.249858   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.255359   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.255359   10364 retry.go:31] will retry after 596.241339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.265542   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:37.265542   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:37.267933   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
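Meanwhile with_retry.go honors the apiserver's Retry-After hint (delay="1s" above) instead of polling in a tight loop. A sketch of a GET that sleeps for the advertised interval before retrying; TLS setup is omitted, so as written this will not handshake with the real endpoint:

    // retry_after.go: back off for the server-advertised Retry-After interval.
    package main

    import (
        "fmt"
        "net/http"
        "strconv"
        "time"
    )

    func getWithRetryAfter(url string, attempts int) (*http.Response, error) {
        for i := 0; ; i++ {
            resp, err := http.Get(url)
            if err == nil && resp.StatusCode != http.StatusTooManyRequests &&
                resp.StatusCode != http.StatusServiceUnavailable {
                return resp, nil
            }
            if i+1 >= attempts {
                return resp, err
            }
            delay := time.Second // default, like the log's delay="1s"
            if resp != nil {
                if s, e := strconv.Atoi(resp.Header.Get("Retry-After")); e == nil {
                    delay = time.Duration(s) * time.Second
                }
                resp.Body.Close()
            }
            fmt.Printf("got a Retry-After response, attempt=%d, sleeping %v\n", i+1, delay)
            time.Sleep(delay)
        }
    }

    func main() {
        resp, err := getWithRetryAfter("https://127.0.0.1:56622/api/v1/nodes/functional-409700", 3)
        fmt.Println(resp, err)
    }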
	I1217 00:32:37.856198   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:37.913554   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:37.941640   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.944331   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.944331   10364 retry.go:31] will retry after 571.98292ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.986334   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.989310   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.989310   10364 retry.go:31] will retry after 625.589854ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.268385   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:38.268385   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:38.271420   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:38.521873   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:38.599872   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:38.599872   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.599872   10364 retry.go:31] will retry after 1.272749266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.621006   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:38.701213   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:38.701287   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.701287   10364 retry.go:31] will retry after 729.524766ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.272125   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:39.272125   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:39.274907   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
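
Note: the with_retry.go lines report a Retry-After response once per second. A hedged sketch of honoring that header is below; the URL is taken from the log, the attempt cap and seconds-only parsing are illustrative, and this is not client-go's implementation.

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// getHonoringRetryAfter retries a GET while the server keeps answering with a
// Retry-After header, sleeping for the advertised delay between attempts.
func getHonoringRetryAfter(url string, maxAttempts int) (*http.Response, error) {
	for attempt := 1; ; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			return nil, err
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" || attempt >= maxAttempts {
			return resp, nil
		}
		resp.Body.Close()
		secs, convErr := strconv.Atoi(ra) // the header may also hold an HTTP date; seconds only here
		if convErr != nil || secs < 1 {
			secs = 1
		}
		fmt.Printf("got Retry-After, attempt=%d, sleeping %ds\n", attempt, secs)
		time.Sleep(time.Duration(secs) * time.Second)
	}
}

func main() {
	resp, err := getHonoringRetryAfter("https://127.0.0.1:56622/api/v1/nodes/functional-409700", 10)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("final status:", resp.Status)
}
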
	I1217 00:32:39.436175   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:39.531183   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:39.531183   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.531183   10364 retry.go:31] will retry after 993.07118ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.877780   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:39.947906   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:39.950459   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.950459   10364 retry.go:31] will retry after 981.929326ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:40.275982   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:40.275982   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:40.278602   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:40.529721   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:40.604194   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:40.610090   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:40.610090   10364 retry.go:31] will retry after 3.313570586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:40.937823   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:41.010101   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:41.013448   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:41.013448   10364 retry.go:31] will retry after 3.983327016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
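
Note: "dial tcp [::1]:8441: connect: connection refused" means nothing is listening on the apiserver port at all; kubectl only hits it because client-side validation downloads the OpenAPI schema first. A quick way to confirm that diagnosis is to probe the apiserver's readiness endpoint directly, as in this minimal sketch (same https://localhost:8441 endpoint the errors point at; skipping TLS verification is for illustration only):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
		},
	}
	// "connection refused" here would mean the apiserver is not listening yet,
	// which is exactly why every kubectl apply above keeps failing.
	resp, err := client.Get("https://localhost:8441/readyz")
	if err != nil {
		fmt.Println("apiserver not ready:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("readyz status:", resp.Status)
}
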
	I1217 00:32:41.279217   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:41.279217   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:41.282049   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:42.282642   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:42.282642   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:42.285895   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:43.285957   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:43.285957   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:43.289436   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:43.928516   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:44.010824   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:44.016536   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:44.016536   10364 retry.go:31] will retry after 3.387443088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:44.290770   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:44.290770   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:44.293999   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:45.002652   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:45.076704   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:45.080905   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:45.080905   10364 retry.go:31] will retry after 2.289915246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:45.294211   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:45.294211   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:45.297045   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:46.297784   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:46.297784   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:46.300989   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:32:46.300989   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:32:46.300989   10364 type.go:168] "Request Body" body=""
	I1217 00:32:46.300989   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:46.304308   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
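
Note: the node_ready.go warning above is a failed poll of the node's "Ready" condition. A sketch of that check written against client-go follows; the kubeconfig path and node name are taken from the log, and error handling is simplified.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-409700", metav1.GetOptions{})
	if err != nil {
		// An EOF here, as in the log, means the connection dropped before the
		// apiserver could answer; the caller is expected to retry.
		fmt.Println("get node failed (will retry):", err)
		return
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Println("Ready condition:", c.Status)
		}
	}
}
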
	I1217 00:32:47.305471   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:47.305471   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:47.308634   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:47.375936   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:47.409078   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:47.458764   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:47.458804   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:47.458804   10364 retry.go:31] will retry after 7.569688135s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:47.484927   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:47.488464   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:47.488464   10364 retry.go:31] will retry after 9.157991048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:48.309180   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:48.309180   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:48.312403   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:49.312469   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:49.312469   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:49.315488   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:50.316234   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:50.316234   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:50.319889   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:51.320680   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:51.320680   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:51.324928   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:32:52.325755   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:52.325755   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:52.328987   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:53.329277   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:53.329277   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:53.332508   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:54.333122   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:54.333449   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:54.337390   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:55.034235   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:55.110067   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:55.114541   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:55.114568   10364 retry.go:31] will retry after 11.854567632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:55.338017   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:55.338017   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:55.341093   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:56.341403   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:56.341403   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:56.344366   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:32:56.344366   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:32:56.344366   10364 type.go:168] "Request Body" body=""
	I1217 00:32:56.344898   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:56.347007   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:56.652443   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:56.739536   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:56.739536   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:56.739536   10364 retry.go:31] will retry after 10.780280137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:57.347379   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:57.347379   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:57.350807   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:58.351069   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:58.351069   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:58.354096   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:59.354451   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:59.354451   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:59.357775   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:00.357853   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:00.357853   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:00.362050   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:33:01.362288   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:01.362722   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:01.365594   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:02.365849   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:02.366254   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:02.369208   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:03.369619   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:03.369619   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:03.373087   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:04.373596   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:04.373596   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:04.376267   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:05.376901   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:05.376901   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:05.380341   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:06.380779   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:06.380779   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:06.384486   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:06.384486   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:06.384486   10364 type.go:168] "Request Body" body=""
	I1217 00:33:06.384486   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:06.386883   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:06.975138   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:33:07.047365   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:07.053212   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:07.053212   10364 retry.go:31] will retry after 9.4400792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:07.388016   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:07.388016   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:07.391682   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:07.525003   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:33:07.600422   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:07.604097   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:07.604097   10364 retry.go:31] will retry after 21.608180779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
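
Note: the error text itself suggests a fallback, rerunning the same apply with --validate=false so kubectl skips the failing OpenAPI download. A sketch of that invocation is below (paths copied verbatim from the log); whether skipping validation is appropriate is a judgment call, since the real problem is the unreachable apiserver, not the manifest.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command(
		"sudo", "KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "--validate=false",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	out, err := cmd.CombinedOutput() // capture kubectl's stdout and stderr together
	fmt.Printf("%s\nerr: %v\n", out, err)
}
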
	I1217 00:33:08.392667   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:08.392667   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:08.395310   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:09.395626   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:09.395626   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:09.400417   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:33:10.400757   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:10.400757   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:10.403934   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:11.404855   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:11.404855   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:11.407439   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:12.407525   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:12.407525   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:12.410864   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:13.411229   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:13.411229   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:13.414667   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:14.414815   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:14.414815   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:14.417914   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:15.418400   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:15.418400   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:15.421658   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:16.421803   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:16.421803   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:16.424468   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:33:16.424468   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:16.425000   10364 type.go:168] "Request Body" body=""
	I1217 00:33:16.425000   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:16.427532   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:16.499443   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:33:16.577484   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:16.582973   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:16.583014   10364 retry.go:31] will retry after 31.220452725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:17.427856   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:17.427856   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:17.430661   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:18.431189   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:18.431189   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:18.434303   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:19.434667   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:19.434667   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:19.437774   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:20.438018   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:20.438018   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:20.441284   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:21.442005   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:21.442005   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:21.445477   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:22.446517   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:22.446517   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:22.451991   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:33:23.452224   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:23.452224   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:23.455297   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:24.455662   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:24.455662   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:24.458123   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:25.458634   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:25.458634   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:25.461576   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:26.462089   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:26.462563   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:26.465489   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:33:26.465489   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:26.465647   10364 type.go:168] "Request Body" body=""
	I1217 00:33:26.465647   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:26.468381   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:27.469289   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:27.469617   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:27.472277   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:28.472725   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:28.473201   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:28.476219   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:29.218035   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:33:29.290496   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:29.295368   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:29.295368   10364 retry.go:31] will retry after 28.200848873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:29.476644   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:29.476644   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:29.479582   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:30.480382   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:30.480382   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:30.483362   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:31.484451   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:31.484451   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:31.488344   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:32.488579   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:32.488579   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:32.491919   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:33.492204   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:33.492204   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:33.494785   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:34.495401   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:34.495401   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:34.499412   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:35.499565   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:35.500315   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:35.503299   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:36.504300   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:36.504300   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:36.507870   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:36.507973   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:36.508033   10364 type.go:168] "Request Body" body=""
	I1217 00:33:36.508113   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:36.510973   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:37.511257   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:37.511257   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:37.514688   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:38.514936   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:38.514936   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:38.518386   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:39.518923   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:39.518923   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:39.520922   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:33:40.521680   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:40.521680   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:40.524367   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:41.525837   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:41.526267   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:41.528903   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:42.529201   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:42.529201   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:42.531842   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:43.532127   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:43.532127   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:43.534820   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:44.536381   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:44.536381   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:44.539631   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:45.540548   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:45.540548   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:45.543978   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:46.544552   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:46.544552   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:46.547995   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:46.547995   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:46.547995   10364 type.go:168] "Request Body" body=""
	I1217 00:33:46.547995   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:46.550843   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:47.551203   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:47.551203   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:47.554480   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:47.809190   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:33:47.891444   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:47.895455   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:47.895455   10364 retry.go:31] will retry after 48.235338214s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:48.554744   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:48.554744   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:48.557563   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:49.558144   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:49.558144   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:49.560984   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:50.561573   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:50.561999   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:50.564681   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:51.564893   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:51.565218   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:51.567822   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:52.568697   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:52.568697   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:52.572043   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:53.572367   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:53.572367   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:53.575543   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:54.576655   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:54.576655   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:54.579628   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:55.580688   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:55.580688   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:55.583829   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:56.585061   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:56.585061   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:56.589344   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:56.589344   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:56.589879   10364 type.go:168] "Request Body" body=""
	I1217 00:33:56.589987   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:56.592329   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:57.501146   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:33:57.569298   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:57.571601   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:57.571601   10364 retry.go:31] will retry after 30.590824936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:57.593179   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:57.593179   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:57.595184   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:58.596116   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:58.596302   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:58.598982   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:59.599603   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:59.599603   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:59.602661   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:00.602875   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:00.603290   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:00.606460   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:01.607309   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:01.607677   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:01.609972   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:02.611301   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:02.611301   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:02.614599   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:03.614800   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:03.614800   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:03.618177   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:04.618602   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:04.618996   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:04.624198   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:05.625646   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:05.625646   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:05.629762   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:06.630421   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:06.630421   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:06.633232   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:06.633232   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:06.633809   10364 type.go:168] "Request Body" body=""
	I1217 00:34:06.633809   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:06.638868   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:07.639683   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:07.639683   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:07.643176   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:08.643409   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:08.643409   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:08.646509   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:09.647445   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:09.647445   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:09.650342   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:10.650843   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:10.651408   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:10.653984   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:11.654782   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:11.654782   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:11.660510   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:12.661264   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:12.661264   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:12.664725   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:13.665643   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:13.665643   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:13.668534   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:14.669351   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:14.669351   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:14.673188   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:15.673306   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:15.673709   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:15.675803   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:16.676778   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:16.676778   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:16.679773   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:16.679872   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:16.679999   10364 type.go:168] "Request Body" body=""
	I1217 00:34:16.680102   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:16.682768   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:17.683817   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:17.683817   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:17.686822   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:18.687027   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:18.687027   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:18.690241   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:19.690694   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:19.690694   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:19.693877   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:20.694298   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:20.694605   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:20.697314   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:21.697742   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:21.697742   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:21.700603   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:22.701210   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:22.701210   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:22.704640   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:23.705172   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:23.705172   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:23.707560   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:24.708954   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:24.708954   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:24.712011   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:25.712539   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:25.712539   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:25.717818   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:26.717996   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:26.717996   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:26.721620   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:34:26.721620   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:26.721620   10364 type.go:168] "Request Body" body=""
	I1217 00:34:26.721620   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:26.725519   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:27.726686   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:27.726686   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:27.729112   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:28.168229   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:34:28.439129   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:28.439129   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:28.439671   10364 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:34:28.730022   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:28.730022   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:28.732579   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:29.733316   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:29.733316   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:29.737180   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:30.737898   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:30.738218   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:30.740633   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:31.741637   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:31.741637   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:31.744968   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:32.745244   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:32.745244   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:32.748688   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:33.749681   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:33.749681   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:33.753864   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:34.754458   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:34.754458   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:34.757550   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:35.757989   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:35.757989   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:35.762318   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:36.136043   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:34:36.218441   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:36.224593   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:36.224593   10364 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:34:36.231181   10364 out.go:179] * Enabled addons: 
	I1217 00:34:36.235148   10364 addons.go:530] duration metric: took 2m0.3003648s for enable addons: enabled=[]
	I1217 00:34:36.762736   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:36.762736   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:36.765107   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:36.765107   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:36.765107   10364 type.go:168] "Request Body" body=""
	I1217 00:34:36.765638   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:36.768239   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:37.768638   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:37.768638   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:37.772263   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:38.772833   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:38.772833   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:38.775690   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:39.776860   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:39.776860   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:39.779543   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:40.779907   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:40.779907   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:40.782631   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:41.783358   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:41.783809   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:41.787117   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:42.787421   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:42.787421   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:42.790478   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:43.791393   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:43.791393   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:43.794768   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:44.795719   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:44.795719   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:44.799050   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:45.799750   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:45.800118   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:45.802333   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:46.802808   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:46.802808   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:46.806272   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:34:46.806272   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:46.806272   10364 type.go:168] "Request Body" body=""
	I1217 00:34:46.806272   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:46.808808   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:47.809106   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:47.809106   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:47.812072   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:48.812377   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:48.812377   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:48.815804   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:49.816160   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:49.816160   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:49.819073   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:50.819687   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:50.819687   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:50.824808   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:51.825256   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:51.825256   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:51.827149   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:34:52.828172   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:52.828172   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:52.831194   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:53.831502   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:53.831502   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:53.835949   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:54.836430   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:54.836430   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:54.840704   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:55.840945   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:55.840945   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:55.844273   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:56.844698   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:56.844774   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:56.847718   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:56.847718   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	[... the same one-second retry cycle (with_retry.go attempts 1-10, each a GET to https://127.0.0.1:56622/api/v1/nodes/functional-409700, followed by a node_ready.go:55 warning that the "Ready" condition could not be fetched: EOF) repeats verbatim, with warnings at 00:35:06, 00:35:16, 00:35:26, 00:35:37, 00:35:47, 00:35:57, 00:36:07 and 00:36:17 ...]
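For context, the repeating block above is minikube waiting for the node's "Ready" condition: each second the client GETs /api/v1/nodes/functional-409700, client-go's retry layer (with_retry.go) re-issues the request up to 10 times when it sees a Retry-After response, and after the tenth attempt minikube's readiness check logs the EOF warning and starts the cycle over. A minimal, hypothetical client-go sketch of such a poll loop follows; the function names and structure are illustrative assumptions, not minikube's actual node_ready.go:

// Hypothetical sketch of a node-Ready poll loop (not minikube's real code):
// GET the node once per second until its Ready condition is True, logging
// and retrying on transport errors such as the EOFs seen in the log above.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil // node reports Ready
				}
			}
		} else {
			// Transport errors (e.g. EOF while the apiserver restarts) are
			// logged and retried, mirroring the warning lines in the log.
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // give up when the overall timeout expires
		case <-time.After(time.Second):
		}
	}
}

func main() {
	// Load kubeconfig from the default location (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "functional-409700"); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}

In the failing run, every GET ends in EOF, so the loop never observes a Ready condition and the warning repeats until the surrounding timeout fires.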
	I1217 00:36:17.183085   10364 type.go:168] "Request Body" body=""
	I1217 00:36:17.183154   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:17.186098   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:18.186373   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:18.186373   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:18.188978   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:19.189978   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:19.189978   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:19.193521   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:20.193758   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:20.194053   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:20.196502   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:21.196916   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:21.196916   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:21.200034   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:22.200545   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:22.200545   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:22.204008   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:23.205276   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:23.205569   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:23.207867   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:24.208451   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:24.208451   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:24.211642   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:25.212042   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:25.212042   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:25.214973   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:26.215279   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:26.215279   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:26.218537   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:27.219034   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:27.219034   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:27.221530   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:36:27.221530   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
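	Editor's note on the cycle above: each iteration has the same shape. minikube's readiness check issues one GET for the node, client-go's retry layer (the with_retry.go lines) re-sends it up to ten times at 1s intervals when it sees a Retry-After response, and once that budget is spent node_ready.go logs the EOF and starts the next poll. The empty status="" together with the final EOF is consistent with the apiserver port at 127.0.0.1:56622 accepting connections (each attempt returns in a few milliseconds) but closing the stream before a complete response, so the check can never observe the Ready condition. A minimal, self-contained sketch of this poll-until-Ready pattern follows; it is NOT minikube's implementation - the loop structure, 10s cadence, 6-minute timeout, and names are assumptions made for the sketch, while the client-go calls (BuildConfigFromFlags, NewForConfig, Nodes().Get) are the real library API:

	// readypoll.go: illustrative sketch only, assuming a kubeconfig at the
	// default location. Not minikube's actual code.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config (setup assumed for the sketch).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// Overall deadline for the whole readiness wait (assumed value).
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		const name = "functional-409700"
		for {
			n, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// client-go has already honored any Retry-After responses
				// internally (the with_retry.go attempts above); an error
				// here means the whole per-request budget is spent.
				fmt.Printf("error getting node %q (will retry): %v\n", name, err)
			} else {
				for _, c := range n.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						fmt.Println("node is Ready")
						return
					}
				}
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for node Ready")
				return
			case <-time.After(10 * time.Second):
				// Matches the ~10s cadence between warnings in the log,
				// which there comes from 10 x 1s retries per poll.
			}
		}
	}

	In the failing run this outer loop spins until the test's own deadline, because the inner retries always end in EOF rather than in either a Ready or a NotReady condition.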
	[eight further identical poll cycles elided: from 00:36:27 to 00:37:47 the same GET https://127.0.0.1:56622/api/v1/nodes/functional-409700 is issued and retried (attempts 1-10, 1s Retry-After delays, each empty response returned in 1-5 ms), with the node_ready.go:55 warning 'error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF' repeating at 00:36:37, 00:36:47, 00:36:57, 00:37:07, 00:37:17, 00:37:27, 00:37:37, and 00:37:47]
	I1217 00:37:47.550643   10364 type.go:168] "Request Body" body=""
	I1217 00:37:47.550786   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:47.552960   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:48.553202   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:48.553202   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:48.558015   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:37:49.559371   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:49.559371   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:49.562548   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:50.562966   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:50.562966   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:50.565800   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:51.566293   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:51.566623   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:51.569597   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:52.570511   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:52.570511   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:52.573392   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:53.573965   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:53.573965   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:53.576340   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:54.577062   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:54.577463   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:54.579836   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:55.580473   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:55.580473   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:55.583734   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:56.584454   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:56.584454   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:56.587256   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:57.588397   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:57.588397   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:57.593527   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1217 00:37:57.593527   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:37:57.593527   10364 type.go:168] "Request Body" body=""
	I1217 00:37:57.593527   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:57.597825   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:37:58.598550   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:58.598550   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:58.602122   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:59.602444   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:59.602444   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:59.605501   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:00.606096   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:00.606096   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:00.608989   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:01.609865   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:01.609965   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:01.613038   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:02.613818   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:02.614067   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:02.617196   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:03.617950   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:03.618366   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:03.621156   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:04.621587   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:04.621587   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:04.624616   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:05.625123   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:05.625123   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:05.627780   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:06.628169   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:06.628602   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:06.632684   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:38:07.633450   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:07.633450   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:07.636697   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:38:07.636697   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:38:07.636697   10364 type.go:168] "Request Body" body=""
	I1217 00:38:07.636697   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:07.638671   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:38:08.639000   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:08.639000   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:08.642420   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:09.642718   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:09.642718   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:09.645881   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:10.646391   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:10.646391   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:10.649653   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:11.650077   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:11.650077   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:11.653855   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:12.654508   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:12.654508   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:12.657918   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:13.658238   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:13.658238   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:13.661446   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:14.661684   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:14.661684   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:14.664655   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:15.665257   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:15.665578   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:15.672111   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=6
	I1217 00:38:16.672363   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:16.672363   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:16.675593   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:17.676054   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:17.676054   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:17.679454   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:38:17.679454   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:38:17.679454   10364 type.go:168] "Request Body" body=""
	I1217 00:38:17.679454   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:17.681452   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:38:18.682087   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:18.682087   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:18.685399   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:19.686028   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:19.686535   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:19.689161   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:20.689948   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:20.690239   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:20.692554   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:21.693716   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:21.694009   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:21.696661   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:22.697780   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:22.697780   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:22.700917   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:23.702225   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:23.702225   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:23.705612   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:24.706750   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:24.706750   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:24.710496   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:25.710729   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:25.711065   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:25.713912   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:26.714178   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:26.714178   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:26.718058   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:27.718245   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:27.718578   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:27.721305   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:38:27.721375   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:38:27.721441   10364 type.go:168] "Request Body" body=""
	I1217 00:38:27.721441   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:27.723332   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:38:28.723805   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:28.724207   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:28.727033   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:29.727723   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:29.727723   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:29.730941   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:30.731355   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:30.731355   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:30.734083   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:31.734645   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:31.734645   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:31.737932   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:32.738159   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:32.738159   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:32.741332   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:33.741889   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:33.741889   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:33.744576   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:34.745133   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:34.745546   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:34.747888   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:35.749177   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:35.749177   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:35.751796   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:38:36.264530   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1217 00:38:36.264530   10364 node_ready.go:38] duration metric: took 6m0.0004133s for node "functional-409700" to be "Ready" ...
	I1217 00:38:36.268017   10364 out.go:203] 
	W1217 00:38:36.270772   10364 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 00:38:36.270772   10364 out.go:285] * 
	W1217 00:38:36.272556   10364 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:38:36.275101   10364 out.go:203] 
	
	
	==> Docker <==
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.065379308Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.065401310Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.065424712Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.065461915Z" level=info msg="Initializing buildkit"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.183346289Z" level=info msg="Completed buildkit initialization"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.191707575Z" level=info msg="Daemon has completed initialization"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.191889990Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.191902191Z" level=info msg="API listen on [::]:2376"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.191916192Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 00:32:32 functional-409700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 00:32:32 functional-409700 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:32:32 functional-409700 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 17 00:32:32 functional-409700 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 17 00:32:32 functional-409700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Loaded network plugin cni"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 00:32:33 functional-409700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:39:32.745829   18423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:39:32.746915   18423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:39:32.749208   18423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:39:32.751951   18423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:39:32.753072   18423 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000806] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000803] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000826] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000811] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000815] FS:  0000000000000000 GS:  0000000000000000
	[Dec17 00:32] CPU: 7 PID: 54557 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000816] RIP: 0033:0x7f3abb92bb20
	[  +0.000446] Code: Unable to access opcode bytes at RIP 0x7f3abb92baf6.
	[  +0.000672] RSP: 002b:00007ffe2fcb88c0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000804] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000788] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000852] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001011] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001269] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001111] FS:  0000000000000000 GS:  0000000000000000
	[  +0.944697] CPU: 4 PID: 54682 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000867] RIP: 0033:0x7fa9cdbc0b20
	[  +0.000408] Code: Unable to access opcode bytes at RIP 0x7fa9cdbc0af6.
	[  +0.000668] RSP: 002b:00007ffde5330df0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.001045] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.001333] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001212] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001083] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000810] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000879] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 00:39:32 up 58 min,  0 user,  load average: 0.46, 0.36, 0.58
	Linux functional-409700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:39:29 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:39:29 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 890.
	Dec 17 00:39:29 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:39:29 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:39:29 functional-409700 kubelet[18266]: E1217 00:39:29.998728   18266 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:39:30 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:39:30 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:39:30 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 891.
	Dec 17 00:39:30 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:39:30 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:39:30 functional-409700 kubelet[18279]: E1217 00:39:30.740174   18279 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:39:30 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:39:30 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:39:31 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 892.
	Dec 17 00:39:31 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:39:31 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:39:31 functional-409700 kubelet[18291]: E1217 00:39:31.497948   18291 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:39:31 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:39:31 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:39:32 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 893.
	Dec 17 00:39:32 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:39:32 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:39:32 functional-409700 kubelet[18318]: E1217 00:39:32.251539   18318 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:39:32 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:39:32 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
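
Editor's note: the kubelet section of the logs above pins down the root cause — this kubelet build refuses to validate on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so the apiserver never comes up and every GET in the retry loop ends in EOF. As a minimal stand-alone sketch (not part of the test suite), the same condition can be checked by probing the unified-hierarchy marker file that exists only under cgroup v2:

	package main

	import (
		"fmt"
		"os"
	)

	func main() {
		// /sys/fs/cgroup/cgroup.controllers exists only when the host
		// mounts the cgroup v2 unified hierarchy; on cgroup v1 (as on
		// this WSL2 kernel) the stat fails.
		if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
			fmt.Println("cgroup v2: kubelet validation would pass")
		} else {
			fmt.Println("cgroup v1: this kubelet build refuses to start")
		}
	}
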
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700: exit status 2 (600.8075ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-409700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubectlGetPods (53.54s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (54.31s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 kubectl -- --context functional-409700 get pods
E1217 00:40:33.696693    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:731: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-409700 kubectl -- --context functional-409700 get pods: exit status 1 (50.5762473s)

                                                
                                                
** stderr ** 
	E1217 00:40:04.475505   10968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:40:14.567590   10968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:40:24.607938   10968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:40:34.648312   10968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:40:44.689733   10968 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:734: failed to get pods. args "out/minikube-windows-amd64.exe -p functional-409700 kubectl -- --context functional-409700 get pods": exit status 1
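
Editor's note: the stderr block above is the client giving up after repeated EOFs on the forwarded apiserver port. A hedged stand-alone sketch of the same probe, assuming only the 127.0.0.1:56622 port mapping recorded in this log (skip-verify because the apiserver serves a self-signed certificate):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		for attempt := 1; attempt <= 10; attempt++ {
			resp, err := client.Get("https://127.0.0.1:56622/api")
			if err == nil {
				resp.Body.Close()
				fmt.Println("apiserver reachable:", resp.Status)
				return
			}
			// While kubelet crash-loops, the port stays bound but the
			// backend closes each connection, surfacing as EOF.
			fmt.Printf("attempt %d: %v\n", attempt, err)
			time.Sleep(1 * time.Second)
		}
	}
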
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-409700
helpers_test.go:244: (dbg) docker inspect functional-409700:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de",
	        "Created": "2025-12-17T00:24:05.223199249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:24:05.522288836Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hosts",
	        "LogPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de-json.log",
	        "Name": "/functional-409700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-409700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-409700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-409700",
	                "Source": "/var/lib/docker/volumes/functional-409700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-409700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-409700",
	                "name.minikube.sigs.k8s.io": "functional-409700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e875b43ca920e8e90c82b8f1c4d2b0999a57d980ebe17c6406f45a4ccb58168",
	            "SandboxKey": "/var/run/docker/netns/6e875b43ca92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56623"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56619"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56620"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56621"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56622"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-409700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ee1b2722ed4e503e063723d4c0c00abc99d4e57387b6e181156511528a5a0896",
	                    "EndpointID": "42fbe7a4b084643a92cc2b6c93734665bcde06afb5eef9fe47b1c8f2757b2d71",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-409700",
	                        "ee5097ea8c4b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
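
Editor's note: the inspect output above is what the harness walks to resolve the forwarded apiserver port (container 8441/tcp mapped to host 56622). A minimal sketch of the same lookup via the docker CLI's Go-template formatter; the container name comes from this log, everything else is stock docker usage:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Equivalent to:
		//   docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-409700
		out, err := exec.Command("docker", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}`,
			"functional-409700").Output()
		if err != nil {
			fmt.Println("docker inspect failed:", err)
			return
		}
		fmt.Println("apiserver host port:", strings.TrimSpace(string(out))) // "56622" in this run
	}
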
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700: exit status 2 (661.5408ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 logs -n 25: (1.6231353s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-045600 image ls --format yaml --alsologtostderr                                                              │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ ssh            │ functional-045600 ssh pgrep buildkitd                                                                                   │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │                     │
	│ image          │ functional-045600 image build -t localhost/my-image:functional-045600 testdata\build --alsologtostderr                  │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls                                                                                              │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                 │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                 │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                 │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ delete         │ -p functional-045600                                                                                                    │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:23 UTC │ 17 Dec 25 00:23 UTC │
	│ start          │ -p functional-409700 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:23 UTC │                     │
	│ start          │ -p functional-409700 --alsologtostderr -v=8                                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ cache          │ functional-409700 cache add registry.k8s.io/pause:3.1                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ functional-409700 cache add registry.k8s.io/pause:3.3                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ functional-409700 cache add registry.k8s.io/pause:latest                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ functional-409700 cache add minikube-local-cache-test:functional-409700                                                 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ functional-409700 cache delete minikube-local-cache-test:functional-409700                                              │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh            │ functional-409700 ssh sudo crictl images                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh            │ functional-409700 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh            │ functional-409700 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │                     │
	│ cache          │ functional-409700 cache reload                                                                                          │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh            │ functional-409700 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ kubectl        │ functional-409700 kubectl -- --context functional-409700 get pods                                                       │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
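The last audit row is the kubectl invocation under test; it recorded no end time, consistent with the failure being diagnosed here. To replay it by hand against the same profile (everything after -- is passed through to the bundled kubectl):

    out/minikube-windows-amd64.exe kubectl -- --context functional-409700 get pods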
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:32:25
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
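Every record below follows this glog layout: a severity letter (I, W, E or F), the date as mmdd, a timestamp, a thread id, source file:line, then the message. To pull only warnings, errors and fatals out of a saved copy of such a log, assuming GNU grep and a hypothetical file name minikube.log:

    grep -E '^[WEF][0-9]{4} ' minikube.log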
	I1217 00:32:25.884023   10364 out.go:360] Setting OutFile to fd 1372 ...
	I1217 00:32:25.926022   10364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:25.926022   10364 out.go:374] Setting ErrFile to fd 1800...
	I1217 00:32:25.926022   10364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:25.940016   10364 out.go:368] Setting JSON to false
	I1217 00:32:25.942016   10364 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3134,"bootTime":1765928411,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:32:25.942016   10364 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:32:25.946016   10364 out.go:179] * [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:32:25.948015   10364 notify.go:221] Checking for updates...
	I1217 00:32:25.950019   10364 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:25.952018   10364 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:32:25.955015   10364 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:32:25.957015   10364 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:32:25.960017   10364 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:32:25.964016   10364 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:32:25.964016   10364 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:32:26.171156   10364 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:32:26.176438   10364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:32:26.427526   10364 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 00:32:26.406486235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:32:26.434528   10364 out.go:179] * Using the docker driver based on existing profile
	I1217 00:32:26.436524   10364 start.go:309] selected driver: docker
	I1217 00:32:26.436524   10364 start.go:927] validating driver "docker" against &{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:26.436524   10364 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:32:26.442525   10364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:32:26.668518   10364 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 00:32:26.649642613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:32:26.752324   10364 cni.go:84] Creating CNI manager for ""
	I1217 00:32:26.752324   10364 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:32:26.752324   10364 start.go:353] cluster config:
	{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:26.755825   10364 out.go:179] * Starting "functional-409700" primary control-plane node in "functional-409700" cluster
	I1217 00:32:26.757701   10364 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 00:32:26.760609   10364 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:32:26.762036   10364 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:32:26.763103   10364 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 00:32:26.763103   10364 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:32:26.763103   10364 cache.go:65] Caching tarball of preloaded images
	I1217 00:32:26.763399   10364 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 00:32:26.763399   10364 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 00:32:26.763399   10364 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\config.json ...
	I1217 00:32:26.840670   10364 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:32:26.840729   10364 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:32:26.840729   10364 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:32:26.840729   10364 start.go:360] acquireMachinesLock for functional-409700: {Name:mk3729943c20c012b6c7db136193ce43a4a81cc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:32:26.840729   10364 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-409700"
	I1217 00:32:26.840729   10364 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:32:26.840729   10364 fix.go:54] fixHost starting: 
	I1217 00:32:26.848208   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:26.901821   10364 fix.go:112] recreateIfNeeded on functional-409700: state=Running err=<nil>
	W1217 00:32:26.901821   10364 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:32:26.907276   10364 out.go:252] * Updating the running docker "functional-409700" container ...
	I1217 00:32:26.907373   10364 machine.go:94] provisionDockerMachine start ...
	I1217 00:32:26.910817   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:26.967003   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:26.967068   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:26.967068   10364 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:32:27.152656   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:32:27.152656   10364 ubuntu.go:182] provisioning hostname "functional-409700"
	I1217 00:32:27.156074   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:27.214234   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:27.214712   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:27.214757   10364 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-409700 && echo "functional-409700" | sudo tee /etc/hostname
	I1217 00:32:27.407594   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:32:27.413090   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:27.490102   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:27.490703   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:27.490749   10364 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-409700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-409700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-409700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:32:27.672866   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:32:27.672866   10364 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 00:32:27.672866   10364 ubuntu.go:190] setting up certificates
	I1217 00:32:27.672866   10364 provision.go:84] configureAuth start
	I1217 00:32:27.676807   10364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:32:27.732901   10364 provision.go:143] copyHostCerts
	I1217 00:32:27.733091   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1217 00:32:27.733091   10364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 00:32:27.733091   10364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 00:32:27.733091   10364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 00:32:27.734330   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1217 00:32:27.734382   10364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 00:32:27.734382   10364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 00:32:27.734382   10364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 00:32:27.735088   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1217 00:32:27.735088   10364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 00:32:27.735088   10364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 00:32:27.735728   10364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 00:32:27.736339   10364 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-409700 san=[127.0.0.1 192.168.49.2 functional-409700 localhost minikube]
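The server certificate is regenerated with the SAN list shown in the line above. One way to confirm which names and IPs actually landed in the issued cert, assuming openssl is on PATH (server.pem standing in for the machines\server.pem path logged above):

    openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'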
	I1217 00:32:27.847670   10364 provision.go:177] copyRemoteCerts
	I1217 00:32:27.851712   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:32:27.854410   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:27.907971   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
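The ssh client parameters logged here (loopback address, host port 56623, per-machine key, user docker) are enough to open the same session by hand; a minimal sketch using those values:

    ssh -i C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa -p 56623 docker@127.0.0.1

The CLI wraps the same session as: out/minikube-windows-amd64.exe ssh -p functional-409700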
	I1217 00:32:28.027015   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1217 00:32:28.027015   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:32:28.064351   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1217 00:32:28.064351   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:32:28.092479   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1217 00:32:28.092479   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:32:28.124650   10364 provision.go:87] duration metric: took 451.7801ms to configureAuth
	I1217 00:32:28.124650   10364 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:32:28.125238   10364 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:32:28.128674   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.184894   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:28.185614   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:28.185614   10364 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 00:32:28.351273   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 00:32:28.351273   10364 ubuntu.go:71] root file system type: overlay
	I1217 00:32:28.351273   10364 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 00:32:28.355630   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.410840   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:28.411043   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:28.411043   10364 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 00:32:28.608128   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 00:32:28.612284   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.672356   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:28.672356   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:28.672356   10364 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 00:32:28.839586   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: 
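The empty output above means diff -u found the freshly written docker.service.new identical to the installed unit, so the mv/daemon-reload/restart branch was skipped and dockerd kept running undisturbed. The update-only-on-change pattern from that one-liner, spelled out as a standalone sketch:

    new=/lib/systemd/system/docker.service.new
    cur=/lib/systemd/system/docker.service
    # diff exits non-zero only when the files differ; swap and restart only then
    if ! sudo diff -u "$cur" "$new"; then
        sudo mv "$new" "$cur"
        sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
    fi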
	I1217 00:32:28.839640   10364 machine.go:97] duration metric: took 1.9322227s to provisionDockerMachine
	I1217 00:32:28.839640   10364 start.go:293] postStartSetup for "functional-409700" (driver="docker")
	I1217 00:32:28.839640   10364 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:32:28.845012   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:32:28.847117   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.904187   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.040693   10364 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:32:29.050158   10364 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 00:32:29.050158   10364 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 00:32:29.050158   10364 command_runner.go:130] > VERSION_ID="12"
	I1217 00:32:29.050158   10364 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 00:32:29.050158   10364 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 00:32:29.050158   10364 command_runner.go:130] > ID=debian
	I1217 00:32:29.050158   10364 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 00:32:29.050158   10364 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 00:32:29.050158   10364 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 00:32:29.050158   10364 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:32:29.050158   10364 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:32:29.050158   10364 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 00:32:29.050158   10364 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 00:32:29.050833   10364 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 00:32:29.050833   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> /etc/ssl/certs/41682.pem
	I1217 00:32:29.051707   10364 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts -> hosts in /etc/test/nested/copy/4168
	I1217 00:32:29.051707   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts -> /etc/test/nested/copy/4168/hosts
	I1217 00:32:29.055303   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4168
	I1217 00:32:29.070738   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 00:32:29.103807   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts --> /etc/test/nested/copy/4168/hosts (40 bytes)
	I1217 00:32:29.133625   10364 start.go:296] duration metric: took 293.9818ms for postStartSetup
	I1217 00:32:29.137970   10364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:32:29.142249   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:29.194718   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.311046   10364 command_runner.go:130] > 1%
	I1217 00:32:29.316279   10364 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:32:29.324732   10364 command_runner.go:130] > 950G
	I1217 00:32:29.324732   10364 fix.go:56] duration metric: took 2.4839807s for fixHost
	I1217 00:32:29.324732   10364 start.go:83] releasing machines lock for "functional-409700", held for 2.4839807s
	I1217 00:32:29.330157   10364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:32:29.384617   10364 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 00:32:29.388675   10364 ssh_runner.go:195] Run: cat /version.json
	I1217 00:32:29.388675   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:29.392044   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:29.442282   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.464827   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.558946   10364 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1217 00:32:29.559478   10364 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 00:32:29.581467   10364 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 00:32:29.585625   10364 ssh_runner.go:195] Run: systemctl --version
	I1217 00:32:29.598125   10364 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 00:32:29.598125   10364 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 00:32:29.602648   10364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 00:32:29.614417   10364 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 00:32:29.615099   10364 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:32:29.621960   10364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:32:29.646439   10364 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:32:29.646439   10364 start.go:496] detecting cgroup driver to use...
	I1217 00:32:29.646439   10364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:32:29.646439   10364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:32:29.668226   10364 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1217 00:32:29.672516   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:32:29.695799   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:32:29.710451   10364 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:32:29.715117   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1217 00:32:29.723829   10364 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 00:32:29.723829   10364 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
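The two warnings above stem from the probe at 00:32:29.384, which invoked the Windows binary name curl.exe inside the Linux node, where only curl exists; exit status 127 means the check never reached the network, so registry reachability was not actually tested either way. The in-node equivalent of the intended probe would be:

    curl -sS -m 2 https://registry.k8s.io/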
	I1217 00:32:29.737249   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:32:29.756347   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:32:29.779698   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:32:29.801679   10364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:32:29.825863   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:32:29.844752   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:32:29.865139   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 00:32:29.885382   10364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:32:29.900142   10364 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 00:32:29.904180   10364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:32:29.922078   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:30.133548   10364 ssh_runner.go:195] Run: sudo systemctl restart containerd
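The sed edits above rewrite /etc/containerd/config.toml in place (cgroupfs instead of the systemd cgroup driver, the runc v2 shim, the pause:3.10.1 sandbox image, /etc/cni/net.d as the CNI conf dir) before this restart. A quick way to confirm the rewritten keys inside the node:

    grep -nE 'SystemdCgroup|sandbox_image|conf_dir' /etc/containerd/config.toml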
	I1217 00:32:30.412249   10364 start.go:496] detecting cgroup driver to use...
	I1217 00:32:30.412298   10364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:32:30.416670   10364 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 00:32:30.435945   10364 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1217 00:32:30.435945   10364 command_runner.go:130] > [Unit]
	I1217 00:32:30.435945   10364 command_runner.go:130] > Description=Docker Application Container Engine
	I1217 00:32:30.435945   10364 command_runner.go:130] > Documentation=https://docs.docker.com
	I1217 00:32:30.435945   10364 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1217 00:32:30.435945   10364 command_runner.go:130] > Wants=network-online.target containerd.service
	I1217 00:32:30.435945   10364 command_runner.go:130] > Requires=docker.socket
	I1217 00:32:30.435945   10364 command_runner.go:130] > StartLimitBurst=3
	I1217 00:32:30.435945   10364 command_runner.go:130] > StartLimitIntervalSec=60
	I1217 00:32:30.435945   10364 command_runner.go:130] > [Service]
	I1217 00:32:30.435945   10364 command_runner.go:130] > Type=notify
	I1217 00:32:30.435945   10364 command_runner.go:130] > Restart=always
	I1217 00:32:30.435945   10364 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1217 00:32:30.435945   10364 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1217 00:32:30.435945   10364 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1217 00:32:30.435945   10364 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1217 00:32:30.435945   10364 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1217 00:32:30.435945   10364 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1217 00:32:30.435945   10364 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1217 00:32:30.435945   10364 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1217 00:32:30.435945   10364 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1217 00:32:30.435945   10364 command_runner.go:130] > ExecStart=
	I1217 00:32:30.435945   10364 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1217 00:32:30.435945   10364 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1217 00:32:30.435945   10364 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1217 00:32:30.435945   10364 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1217 00:32:30.435945   10364 command_runner.go:130] > LimitNOFILE=infinity
	I1217 00:32:30.435945   10364 command_runner.go:130] > LimitNPROC=infinity
	I1217 00:32:30.435945   10364 command_runner.go:130] > LimitCORE=infinity
	I1217 00:32:30.435945   10364 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1217 00:32:30.435945   10364 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1217 00:32:30.435945   10364 command_runner.go:130] > TasksMax=infinity
	I1217 00:32:30.437404   10364 command_runner.go:130] > TimeoutStartSec=0
	I1217 00:32:30.437404   10364 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1217 00:32:30.437404   10364 command_runner.go:130] > Delegate=yes
	I1217 00:32:30.437404   10364 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1217 00:32:30.437404   10364 command_runner.go:130] > KillMode=process
	I1217 00:32:30.437404   10364 command_runner.go:130] > OOMScoreAdjust=-500
	I1217 00:32:30.437404   10364 command_runner.go:130] > [Install]
	I1217 00:32:30.437404   10364 command_runner.go:130] > WantedBy=multi-user.target
	I1217 00:32:30.443833   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:32:30.468114   10364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:32:30.542786   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:32:30.567969   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:32:30.586631   10364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:32:30.606342   10364 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
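crictl is repointed here from the containerd socket (written at 00:32:29.646) to cri-dockerd, matching the docker container runtime selected for this profile. The endpoint can also be passed explicitly, which reproduces the version probe the harness runs at the end of this excerpt:

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version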
	I1217 00:32:30.611878   10364 ssh_runner.go:195] Run: which cri-dockerd
	I1217 00:32:30.618659   10364 command_runner.go:130] > /usr/bin/cri-dockerd
	I1217 00:32:30.623279   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 00:32:30.636760   10364 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 00:32:30.661689   10364 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 00:32:30.828747   10364 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 00:32:30.988536   10364 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 00:32:30.988536   10364 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 00:32:31.016800   10364 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 00:32:31.041396   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:31.178126   10364 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 00:32:32.195651   10364 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0175164s)
	I1217 00:32:32.199801   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:32:32.224938   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 00:32:32.247199   10364 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 00:32:32.275016   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:32:32.297360   10364 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 00:32:32.448301   10364 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 00:32:32.597398   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:32.739627   10364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 00:32:32.765463   10364 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 00:32:32.790341   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:32.929296   10364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 00:32:33.067092   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:32:33.087872   10364 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 00:32:33.092277   10364 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 00:32:33.102122   10364 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1217 00:32:33.102122   10364 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 00:32:33.102122   10364 command_runner.go:130] > Device: 0,112	Inode: 1758        Links: 1
	I1217 00:32:33.102122   10364 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1217 00:32:33.102122   10364 command_runner.go:130] > Access: 2025-12-17 00:32:32.939070006 +0000
	I1217 00:32:33.102122   10364 command_runner.go:130] > Modify: 2025-12-17 00:32:32.939070006 +0000
	I1217 00:32:33.102122   10364 command_runner.go:130] > Change: 2025-12-17 00:32:32.939070006 +0000
	I1217 00:32:33.103099   10364 command_runner.go:130] >  Birth: -
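
Note: "Will wait 60s for socket path" is implemented as a stat poll; the listing above shows the socket already exists (mode srw-rw----, group docker). A minimal sketch of that wait loop, assuming a 500ms poll interval (the real interval is minikube's and is not shown in the log):

    // wait_socket.go - poll until a path exists and is a unix socket.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("timed out waiting for %s", path)
    }

    func main() {
        if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("socket is up")
    }
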
	I1217 00:32:33.103099   10364 start.go:564] Will wait 60s for crictl version
	I1217 00:32:33.106627   10364 ssh_runner.go:195] Run: which crictl
	I1217 00:32:33.116038   10364 command_runner.go:130] > /usr/local/bin/crictl
	I1217 00:32:33.119921   10364 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:32:33.163697   10364 command_runner.go:130] > Version:  0.1.0
	I1217 00:32:33.163697   10364 command_runner.go:130] > RuntimeName:  docker
	I1217 00:32:33.163697   10364 command_runner.go:130] > RuntimeVersion:  29.1.3
	I1217 00:32:33.163697   10364 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 00:32:33.163697   10364 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 00:32:33.167790   10364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:32:33.207644   10364 command_runner.go:130] > 29.1.3
	I1217 00:32:33.212842   10364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:32:33.256029   10364 command_runner.go:130] > 29.1.3
	I1217 00:32:33.258896   10364 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 00:32:33.262892   10364 cli_runner.go:164] Run: docker exec -t functional-409700 dig +short host.docker.internal
	I1217 00:32:33.463377   10364 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 00:32:33.467155   10364 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 00:32:33.475542   10364 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1217 00:32:33.478907   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:33.533350   10364 kubeadm.go:884] updating cluster {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:32:33.533350   10364 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:32:33.537278   10364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1217 00:32:33.575248   10364 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:32:33.575248   10364 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 00:32:33.575248   10364 docker.go:621] Images already preloaded, skipping extraction
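
Note: the preload check is a set comparison: list what the daemon already holds via docker images --format {{.Repository}}:{{.Tag}} and skip tarball extraction when every expected image is present. A sketch with a hand-picked subset of the expected list (image names taken from the log above):

    // preload_check.go - decide whether image extraction can be skipped.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
        if err != nil {
            fmt.Println("docker images failed:", err)
            return
        }
        have := map[string]bool{}
        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
            have[line] = true
        }
        expected := []string{
            "registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
            "registry.k8s.io/etcd:3.6.5-0",
            "registry.k8s.io/pause:3.10.1",
            "gcr.io/k8s-minikube/storage-provisioner:v5",
        }
        for _, img := range expected {
            if !have[img] {
                fmt.Println("missing, extraction needed:", img)
                return
            }
        }
        fmt.Println("images already preloaded, skipping extraction")
    }
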
	I1217 00:32:33.579121   10364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:32:33.614970   10364 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 00:32:33.615044   10364 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 00:32:33.615044   10364 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1217 00:32:33.615141   10364 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:32:33.615171   10364 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 00:32:33.615171   10364 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:32:33.615171   10364 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1217 00:32:33.615349   10364 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-409700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
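
Note: the kubelet drop-in above is rendered from per-node values (binary version, hostname override, node IP). A hedged text/template sketch of that rendering; the template string and field names here are illustrative, not minikube's own:

    // kubelet_unit.go - render a kubelet systemd drop-in from node values.
    package main

    import (
        "os"
        "text/template"
    )

    const unit = "[Unit]\nWants=docker.socket\n\n[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.Node}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.IP}}\n\n[Install]\n"

    func main() {
        t := template.Must(template.New("kubelet").Parse(unit))
        t.Execute(os.Stdout, map[string]string{
            "Version": "v1.35.0-beta.0",
            "Node":    "functional-409700",
            "IP":      "192.168.49.2",
        })
    }
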
	I1217 00:32:33.618510   10364 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 00:32:34.052354   10364 command_runner.go:130] > cgroupfs
	I1217 00:32:34.052472   10364 cni.go:84] Creating CNI manager for ""
	I1217 00:32:34.052529   10364 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:32:34.052529   10364 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:32:34.052529   10364 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-409700 NodeName:functional-409700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPod
Path:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:32:34.052529   10364 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-409700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:32:34.056808   10364 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:32:34.073105   10364 command_runner.go:130] > kubeadm
	I1217 00:32:34.073177   10364 command_runner.go:130] > kubectl
	I1217 00:32:34.073177   10364 command_runner.go:130] > kubelet
	I1217 00:32:34.073240   10364 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:32:34.077459   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:32:34.090893   10364 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 00:32:34.114750   10364 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:32:34.135531   10364 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
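
Note: at this point the generated kubeadm config (2225 bytes) has been written to /var/tmp/minikube/kubeadm.yaml.new. The log does not validate it separately, but recent kubeadm releases expose "kubeadm config validate"; a hedged sketch of such an optional pre-flight check (paths from the log, subcommand availability assumed):

    // validate_kubeadm.go - optional: sanity-check the generated config.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("sudo", "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm",
            "config", "validate", "--config", "/var/tmp/minikube/kubeadm.yaml.new").CombinedOutput()
        fmt.Print(string(out))
        if err != nil {
            fmt.Println("validation failed:", err)
        }
    }
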
	I1217 00:32:34.159985   10364 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:32:34.168280   10364 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 00:32:34.172492   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:34.310890   10364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:32:34.700023   10364 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700 for IP: 192.168.49.2
	I1217 00:32:34.700115   10364 certs.go:195] generating shared ca certs ...
	I1217 00:32:34.700115   10364 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:34.700485   10364 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 00:32:34.701055   10364 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 00:32:34.701055   10364 certs.go:257] generating profile certs ...
	I1217 00:32:34.701864   10364 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\client.key
	I1217 00:32:34.702120   10364 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key.dc66fb1b
	I1217 00:32:34.702437   10364 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key
	I1217 00:32:34.702487   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 00:32:34.702646   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 00:32:34.703540   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 00:32:34.703598   10364 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 00:32:34.703598   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 00:32:34.703598   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 00:32:34.704137   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 00:32:34.704439   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 00:32:34.704439   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 00:32:34.704439   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:34.704970   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem -> /usr/share/ca-certificates/4168.pem
	I1217 00:32:34.705196   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> /usr/share/ca-certificates/41682.pem
	I1217 00:32:34.706089   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:32:34.736497   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:32:34.769712   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:32:34.802984   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:32:34.830525   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:32:34.860563   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:32:34.889179   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:32:34.920536   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:32:34.947027   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:32:34.978500   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 00:32:35.008458   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 00:32:35.040774   10364 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:32:35.063574   10364 ssh_runner.go:195] Run: openssl version
	I1217 00:32:35.083169   10364 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 00:32:35.087374   10364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.105491   10364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:32:35.130590   10364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.139034   10364 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.139034   10364 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.144343   10364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.192130   10364 command_runner.go:130] > b5213941
	I1217 00:32:35.199882   10364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:32:35.220625   10364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.238544   10364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 00:32:35.259065   10364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.266549   10364 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.266638   10364 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.271223   10364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.315698   10364 command_runner.go:130] > 51391683
	I1217 00:32:35.322687   10364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:32:35.339650   10364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.358290   10364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 00:32:35.374891   10364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.383058   10364 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.383058   10364 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.387660   10364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.431595   10364 command_runner.go:130] > 3ec20f2e
	I1217 00:32:35.436891   10364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
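
Note: the openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed CA directory: each PEM is linked as /etc/ssl/certs/<subject-hash>.0 so TLS stacks can locate it by hash lookup. A sketch of one hash-and-link round in Go, shelling out to openssl for the hash exactly as the log does:

    // trust_cert.go - link a CA cert into /etc/ssl/certs by subject hash.
    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            fmt.Println("openssl failed:", err)
            return
        }
        hash := strings.TrimSpace(string(out)) // b5213941 for minikubeCA in this run
        link := "/etc/ssl/certs/" + hash + ".0"
        os.Remove(link) // mirror ln -fs: replace any stale link
        if err := os.Symlink(pem, link); err != nil {
            fmt.Println("symlink failed (needs root):", err)
            return
        }
        fmt.Println("trusted via", link)
    }
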
	I1217 00:32:35.453526   10364 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:32:35.462183   10364 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:32:35.462183   10364 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 00:32:35.462183   10364 command_runner.go:130] > Device: 8,48	Inode: 15294       Links: 1
	I1217 00:32:35.462183   10364 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:32:35.462183   10364 command_runner.go:130] > Access: 2025-12-17 00:28:21.018933524 +0000
	I1217 00:32:35.462183   10364 command_runner.go:130] > Modify: 2025-12-17 00:24:18.315890848 +0000
	I1217 00:32:35.462183   10364 command_runner.go:130] > Change: 2025-12-17 00:24:18.315890848 +0000
	I1217 00:32:35.462183   10364 command_runner.go:130] >  Birth: 2025-12-17 00:24:18.315890848 +0000
	I1217 00:32:35.466206   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:32:35.509324   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.514900   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:32:35.558615   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.563444   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:32:35.608112   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.612517   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:32:35.657914   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.662797   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:32:35.707243   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.713694   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:32:35.760477   10364 command_runner.go:130] > Certificate will not expire
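
Note: each "openssl x509 -checkend 86400" above asks whether the certificate expires within the next 24 hours (86400 seconds). The same test in pure Go, for one of the paths from the log:

    // checkend.go - Go equivalent of openssl's -checkend 86400.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Println("read failed:", err)
            return
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Println("no PEM block found")
            return
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Println("parse failed:", err)
            return
        }
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("Certificate will expire")
        } else {
            fmt.Println("Certificate will not expire")
        }
    }
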
	I1217 00:32:35.761002   10364 kubeadm.go:401] StartCluster: {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:35.764353   10364 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 00:32:35.796231   10364 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:32:35.810900   10364 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 00:32:35.810946   10364 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 00:32:35.810946   10364 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 00:32:35.810996   10364 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:32:35.810996   10364 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:32:35.815318   10364 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:32:35.828811   10364 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:32:35.832840   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:35.889236   10364 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-409700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:35.889236   10364 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-409700" cluster setting kubeconfig missing "functional-409700" context setting]
	I1217 00:32:35.889236   10364 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:35.906814   10364 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:35.907042   10364 kapi.go:59] client config for functional-409700: &rest.Config{Host:"https://127.0.0.1:56622", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff734ad9080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:32:35.908414   10364 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
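
Note: the kapi.go / rest.Config dump above is client-go machinery: minikube loads the repaired kubeconfig and builds a typed client against https://127.0.0.1:56622. A minimal client-go sketch of the same two steps (real client-go import paths; the kubeconfig path is the one from the log):

    // client_config.go - load a kubeconfig and build a Kubernetes client.
    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", `C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`)
        if err != nil {
            fmt.Println("kubeconfig load failed:", err)
            return
        }
        fmt.Println("host:", cfg.Host) // https://127.0.0.1:56622 in this run
        if _, err := kubernetes.NewForConfig(cfg); err != nil {
            fmt.Println("client build failed:", err)
        }
    }
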
	I1217 00:32:35.912354   10364 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:32:35.931570   10364 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1217 00:32:35.931672   10364 kubeadm.go:602] duration metric: took 120.6751ms to restartPrimaryControlPlane
	I1217 00:32:35.931672   10364 kubeadm.go:403] duration metric: took 170.6688ms to StartCluster
	I1217 00:32:35.931672   10364 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:35.931672   10364 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:35.932861   10364 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:35.933736   10364 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 00:32:35.933736   10364 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:32:35.933901   10364 addons.go:70] Setting storage-provisioner=true in profile "functional-409700"
	I1217 00:32:35.933901   10364 addons.go:239] Setting addon storage-provisioner=true in "functional-409700"
	I1217 00:32:35.933901   10364 addons.go:70] Setting default-storageclass=true in profile "functional-409700"
	I1217 00:32:35.934051   10364 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:32:35.934098   10364 host.go:66] Checking if "functional-409700" exists ...
	I1217 00:32:35.934098   10364 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-409700"
	I1217 00:32:35.936531   10364 out.go:179] * Verifying Kubernetes components...
	I1217 00:32:35.942620   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:35.942620   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:35.944620   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:36.000654   10364 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:32:36.002654   10364 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:36.002654   10364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:32:36.005647   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:36.010648   10364 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:36.011652   10364 kapi.go:59] client config for functional-409700: &rest.Config{Host:"https://127.0.0.1:56622", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff734ad9080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:32:36.012648   10364 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:32:36.012648   10364 addons.go:239] Setting addon default-storageclass=true in "functional-409700"
	I1217 00:32:36.012648   10364 host.go:66] Checking if "functional-409700" exists ...
	I1217 00:32:36.019655   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:36.056654   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:36.069645   10364 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:36.069645   10364 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:32:36.072658   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:36.098645   10364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:32:36.122646   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:36.187680   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:36.202921   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:36.260682   10364 node_ready.go:35] waiting up to 6m0s for node "functional-409700" to be "Ready" ...
	I1217 00:32:36.260849   10364 type.go:168] "Request Body" body=""
	I1217 00:32:36.261061   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:36.264195   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:36.265260   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:36.336693   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.340106   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.340627   10364 retry.go:31] will retry after 202.939607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.388976   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.393288   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.393853   10364 retry.go:31] will retry after 227.289762ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.548879   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:36.622050   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.626260   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.626260   10364 retry.go:31] will retry after 395.113457ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.626489   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:36.698520   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.702459   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.702459   10364 retry.go:31] will retry after 468.39049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.026805   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:37.111151   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.116224   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.116762   10364 retry.go:31] will retry after 792.119284ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.177175   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:37.249858   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.255359   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.255359   10364 retry.go:31] will retry after 596.241339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.265542   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:37.265542   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:37.267933   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
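
Note: the node_ready poll goes through client-go's with_retry.go: a GET on /api/v1/nodes/functional-409700 that, on a Retry-After response, sleeps the advertised delay and re-issues the request (attempts 1-3 are visible above and below). A bare-bones HTTP sketch of that header handling, outside client-go:

    // retry_after.go - honor Retry-After when polling an API endpoint.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "strconv"
        "time"
    )

    func main() {
        client := &http.Client{Transport: &http.Transport{
            // demo only: minikube authenticates with the cluster CA and client certs
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        url := "https://127.0.0.1:56622/api/v1/nodes/functional-409700"
        for attempt := 1; attempt <= 3; attempt++ {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Println("request failed:", err)
                return
            }
            secs, convErr := strconv.Atoi(resp.Header.Get("Retry-After"))
            resp.Body.Close()
            if convErr == nil && secs > 0 {
                fmt.Printf("Got a Retry-After response, delay=%ds, attempt=%d\n", secs, attempt)
                time.Sleep(time.Duration(secs) * time.Second)
                continue
            }
            fmt.Println("status:", resp.Status)
            return
        }
    }
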
	I1217 00:32:37.856198   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:37.913554   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:37.941640   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.944331   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.944331   10364 retry.go:31] will retry after 571.98292ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.986334   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.989310   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.989310   10364 retry.go:31] will retry after 625.589854ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.268385   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:38.268385   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:38.271420   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:38.521873   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:38.599872   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:38.599872   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.599872   10364 retry.go:31] will retry after 1.272749266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.621006   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:38.701213   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:38.701287   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.701287   10364 retry.go:31] will retry after 729.524766ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.272125   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:39.272125   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:39.274907   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:39.436175   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:39.531183   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:39.531183   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.531183   10364 retry.go:31] will retry after 993.07118ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.877780   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:39.947906   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:39.950459   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.950459   10364 retry.go:31] will retry after 981.929326ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:40.275982   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:40.275982   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:40.278602   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:40.529721   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:40.604194   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:40.610090   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:40.610090   10364 retry.go:31] will retry after 3.313570586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:40.937823   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:41.010101   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:41.013448   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:41.013448   10364 retry.go:31] will retry after 3.983327016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:41.279217   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:41.279217   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:41.282049   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:42.282642   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:42.282642   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:42.285895   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:43.285957   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:43.285957   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:43.289436   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:43.928516   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:44.010824   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:44.016536   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:44.016536   10364 retry.go:31] will retry after 3.387443088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:44.290770   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:44.290770   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:44.293999   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:45.002652   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:45.076704   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:45.080905   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:45.080905   10364 retry.go:31] will retry after 2.289915246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:45.294211   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:45.294211   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:45.297045   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:46.297784   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:46.297784   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:46.300989   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:32:46.300989   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
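Editor's note: the with_retry/round_trippers lines show client-go re-issuing the node GET once per second after each Retry-After response, giving up after 10 attempts, and surfacing an EOF to the caller, which then starts a fresh cycle. A hedged sketch of a loop that honors Retry-After the same way (the endpoint and the 10-attempt cap are assumptions from this log, not the client-go implementation):

```go
// Sketch of a GET loop that honors a Retry-After response header,
// mirroring the "Got a Retry-After response ... attempt=N" lines
// above. Endpoint and attempt cap are assumptions, not client-go.
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

func getWithRetryAfter(url string, maxAttempts int) (*http.Response, error) {
	var lastErr error
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := http.Get(url)
		if err != nil {
			lastErr = err
			time.Sleep(time.Second)
			continue
		}
		ra := resp.Header.Get("Retry-After")
		if ra == "" && resp.StatusCode < 500 {
			return resp, nil // a usable response: stop retrying
		}
		delay := time.Second
		if secs, convErr := strconv.Atoi(ra); convErr == nil {
			delay = time.Duration(secs) * time.Second
		}
		resp.Body.Close()
		fmt.Printf("attempt %d: retrying after %v\n", attempt, delay)
		time.Sleep(delay)
	}
	return nil, fmt.Errorf("gave up after %d attempts (last error: %v)", maxAttempts, lastErr)
}

func main() {
	// 127.0.0.1:56622 is the forwarded apiserver port from this log.
	if _, err := getWithRetryAfter("https://127.0.0.1:56622/version", 10); err != nil {
		fmt.Println(err)
	}
}
```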
	I1217 00:32:46.300989   10364 type.go:168] "Request Body" body=""
	I1217 00:32:46.300989   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:46.304308   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:47.305471   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:47.305471   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:47.308634   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:47.375936   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:47.409078   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:47.458764   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:47.458804   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:47.458804   10364 retry.go:31] will retry after 7.569688135s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:47.484927   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:47.488464   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:47.488464   10364 retry.go:31] will retry after 9.157991048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:48.309180   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:48.309180   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:48.312403   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:49.312469   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:49.312469   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:49.315488   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:50.316234   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:50.316234   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:50.319889   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:51.320680   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:51.320680   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:51.324928   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:32:52.325755   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:52.325755   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:52.328987   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:53.329277   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:53.329277   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:53.332508   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:54.333122   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:54.333449   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:54.337390   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:55.034235   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:55.110067   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:55.114541   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:55.114568   10364 retry.go:31] will retry after 11.854567632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:55.338017   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:55.338017   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:55.341093   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:56.341403   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:56.341403   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:56.344366   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:32:56.344366   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:32:56.344366   10364 type.go:168] "Request Body" body=""
	I1217 00:32:56.344898   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:56.347007   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:56.652443   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:56.739536   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:56.739536   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:56.739536   10364 retry.go:31] will retry after 10.780280137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:57.347379   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:57.347379   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:57.350807   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:58.351069   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:58.351069   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:58.354096   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:59.354451   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:59.354451   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:59.357775   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:00.357853   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:00.357853   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:00.362050   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:33:01.362288   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:01.362722   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:01.365594   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:02.365849   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:02.366254   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:02.369208   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:03.369619   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:03.369619   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:03.373087   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:04.373596   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:04.373596   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:04.376267   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:05.376901   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:05.376901   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:05.380341   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:06.380779   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:06.380779   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:06.384486   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:06.384486   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:06.384486   10364 type.go:168] "Request Body" body=""
	I1217 00:33:06.384486   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:06.386883   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:06.975138   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:33:07.047365   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:07.053212   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:07.053212   10364 retry.go:31] will retry after 9.4400792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:07.388016   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:07.388016   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:07.391682   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:07.525003   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:33:07.600422   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:07.604097   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:07.604097   10364 retry.go:31] will retry after 21.608180779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:08.392667   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:08.392667   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:08.395310   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:09.395626   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:09.395626   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:09.400417   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:33:10.400757   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:10.400757   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:10.403934   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:11.404855   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:11.404855   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:11.407439   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:12.407525   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:12.407525   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:12.410864   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:13.411229   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:13.411229   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:13.414667   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:14.414815   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:14.414815   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:14.417914   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:15.418400   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:15.418400   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:15.421658   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:16.421803   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:16.421803   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:16.424468   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:33:16.424468   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
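Editor's note: at a high level, the node_ready poll repeated above is fetching the node object and checking its Ready condition, retrying on transport errors such as these EOFs. A sketch with client-go, assuming a reachable kubeconfig (illustrative; not minikube's implementation):

```go
// High-level sketch of the node_ready poll: fetch the node and check
// its Ready condition, retrying on transport errors like the EOFs
// above. Assumes client-go and a reachable kubeconfig; illustrative,
// not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "functional-409700", metav1.GetOptions{})
		if err != nil {
			fmt.Println("will retry:", err) // the log's EOFs land here
			time.Sleep(10 * time.Second)
			continue
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
				fmt.Println("node is Ready")
				return
			}
		}
		time.Sleep(10 * time.Second)
	}
}
```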
	I1217 00:33:16.425000   10364 type.go:168] "Request Body" body=""
	I1217 00:33:16.425000   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:16.427532   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:16.499443   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:33:16.577484   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:16.582973   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:16.583014   10364 retry.go:31] will retry after 31.220452725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:17.427856   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:17.427856   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:17.430661   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:18.431189   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:18.431189   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:18.434303   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:19.434667   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:19.434667   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:19.437774   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:20.438018   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:20.438018   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:20.441284   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:21.442005   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:21.442005   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:21.445477   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:22.446517   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:22.446517   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:22.451991   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:33:23.452224   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:23.452224   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:23.455297   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:24.455662   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:24.455662   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:24.458123   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:25.458634   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:25.458634   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:25.461576   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:26.462089   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:26.462563   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:26.465489   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:33:26.465489   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:26.465647   10364 type.go:168] "Request Body" body=""
	I1217 00:33:26.465647   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:26.468381   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:27.469289   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:27.469617   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:27.472277   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:28.472725   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:28.473201   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:28.476219   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:29.218035   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:33:29.290496   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:29.295368   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:29.295368   10364 retry.go:31] will retry after 28.200848873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
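Every addon apply in this log fails the same way: kubectl's client-side validation tries to fetch the OpenAPI schema from the apiserver at localhost:8441 inside the guest, the connection is refused because the apiserver is not serving, and kubectl exits 1 before anything is applied. The suggested --validate=false would only skip the schema fetch, not cure the refused connection, and the node polls against the host-forwarded 127.0.0.1:56622 are failing too, so the apiserver itself is down rather than just one port. A rough stand-in for the failing invocation, using os/exec instead of minikube's internal ssh_runner (illustrative only):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the command from the log; sudo accepts the leading
	// KUBECONFIG=... assignment as an environment override.
	cmd := exec.Command("sudo",
		"KUBECONFIG=/var/lib/minikube/kubeconfig",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"apply", "--force", "-f",
		"/etc/kubernetes/addons/storage-provisioner.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		// Exits with status 1 while the apiserver is down.
		fmt.Println("apply failed:", err)
	}
}
```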
	I1217 00:33:29.476644   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:29.476644   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:29.479582   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:30.480382   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:30.480382   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:30.483362   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:31.484451   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:31.484451   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:31.488344   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:32.488579   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:32.488579   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:32.491919   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:33.492204   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:33.492204   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:33.494785   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:34.495401   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:34.495401   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:34.499412   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:35.499565   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:35.500315   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:35.503299   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:36.504300   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:36.504300   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:36.507870   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:36.507973   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:36.508033   10364 type.go:168] "Request Body" body=""
	I1217 00:33:36.508113   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:36.510973   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:37.511257   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:37.511257   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:37.514688   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:38.514936   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:38.514936   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:38.518386   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:39.518923   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:39.518923   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:39.520922   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:33:40.521680   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:40.521680   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:40.524367   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:41.525837   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:41.526267   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:41.528903   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:42.529201   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:42.529201   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:42.531842   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:43.532127   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:43.532127   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:43.534820   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:44.536381   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:44.536381   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:44.539631   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:45.540548   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:45.540548   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:45.543978   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:46.544552   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:46.544552   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:46.547995   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:46.547995   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:46.547995   10364 type.go:168] "Request Body" body=""
	I1217 00:33:46.547995   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:46.550843   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:47.551203   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:47.551203   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:47.554480   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:47.809190   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:33:47.891444   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:47.895455   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:47.895455   10364 retry.go:31] will retry after 48.235338214s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
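Note that the retry.go lines schedule each re-apply with a randomized, growing delay (28.2s, 48.2s, 30.6s in this run) rather than a fixed interval, so repeated failures back off instead of hammering the apiserver. A generic jittered-backoff helper in that spirit (a sketch, not minikube's actual retry package):

```go
package sketch

import (
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff runs fn until it succeeds or attempts are exhausted,
// doubling the base delay each round and adding up to 50% jitter so that
// concurrent retriers do not stampede the apiserver together.
func retryWithBackoff(fn func() error, attempts int, base time.Duration) error {
	var err error
	delay := base
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := delay + time.Duration(rand.Int63n(int64(delay)/2+1))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return err
}
```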
	I1217 00:33:48.554744   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:48.554744   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:48.557563   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:49.558144   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:49.558144   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:49.560984   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:50.561573   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:50.561999   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:50.564681   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:51.564893   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:51.565218   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:51.567822   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:52.568697   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:52.568697   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:52.572043   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:53.572367   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:53.572367   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:53.575543   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:54.576655   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:54.576655   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:54.579628   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:55.580688   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:55.580688   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:55.583829   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:56.585061   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:56.585061   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:56.589344   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:56.589344   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:56.589879   10364 type.go:168] "Request Body" body=""
	I1217 00:33:56.589987   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:56.592329   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:57.501146   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:33:57.569298   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:57.571601   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:57.571601   10364 retry.go:31] will retry after 30.590824936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:57.593179   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:57.593179   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:57.595184   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:58.596116   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:58.596302   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:58.598982   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:59.599603   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:59.599603   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:59.602661   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:00.602875   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:00.603290   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:00.606460   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:01.607309   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:01.607677   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:01.609972   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:02.611301   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:02.611301   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:02.614599   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:03.614800   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:03.614800   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:03.618177   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:04.618602   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:04.618996   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:04.624198   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:05.625646   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:05.625646   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:05.629762   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:06.630421   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:06.630421   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:06.633232   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:06.633232   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:06.633809   10364 type.go:168] "Request Body" body=""
	I1217 00:34:06.633809   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:06.638868   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:07.639683   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:07.639683   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:07.643176   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:08.643409   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:08.643409   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:08.646509   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:09.647445   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:09.647445   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:09.650342   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:10.650843   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:10.651408   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:10.653984   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:11.654782   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:11.654782   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:11.660510   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:12.661264   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:12.661264   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:12.664725   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:13.665643   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:13.665643   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:13.668534   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:14.669351   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:14.669351   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:14.673188   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:15.673306   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:15.673709   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:15.675803   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:16.676778   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:16.676778   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:16.679773   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:16.679872   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:16.679999   10364 type.go:168] "Request Body" body=""
	I1217 00:34:16.680102   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:16.682768   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:17.683817   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:17.683817   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:17.686822   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:18.687027   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:18.687027   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:18.690241   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:19.690694   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:19.690694   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:19.693877   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:20.694298   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:20.694605   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:20.697314   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:21.697742   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:21.697742   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:21.700603   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:22.701210   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:22.701210   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:22.704640   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:23.705172   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:23.705172   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:23.707560   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:24.708954   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:24.708954   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:24.712011   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:25.712539   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:25.712539   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:25.717818   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:26.717996   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:26.717996   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:26.721620   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:34:26.721620   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:26.721620   10364 type.go:168] "Request Body" body=""
	I1217 00:34:26.721620   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:26.725519   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:27.726686   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:27.726686   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:27.729112   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:28.168229   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:34:28.439129   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:28.439129   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:28.439671   10364 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:34:28.730022   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:28.730022   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:28.732579   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:29.733316   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:29.733316   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:29.737180   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:30.737898   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:30.738218   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:30.740633   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:31.741637   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:31.741637   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:31.744968   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:32.745244   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:32.745244   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:32.748688   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:33.749681   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:33.749681   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:33.753864   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:34.754458   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:34.754458   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:34.757550   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:35.757989   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:35.757989   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:35.762318   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:36.136043   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:34:36.218441   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:36.224593   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:36.224593   10364 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:34:36.231181   10364 out.go:179] * Enabled addons: 
	I1217 00:34:36.235148   10364 addons.go:530] duration metric: took 2m0.3003648s for enable addons: enabled=[]
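At this point the addon phase has spent its roughly two-minute budget and reports an empty enabled=[] list, while the node-ready poll continues below. Bounding a phase like this is idiomatically done with a context deadline; a sketch of the shape (hypothetical helper, not minikube's addons.go):

```go
package sketch

import (
	"context"
	"time"
)

// enableAddons attempts each addon callback under a shared deadline;
// callbacks that never succeed within it simply drop out of the
// returned enabled list, as happened in this run.
func enableAddons(parent context.Context, callbacks map[string]func(context.Context) error) []string {
	ctx, cancel := context.WithTimeout(parent, 2*time.Minute)
	defer cancel()

	var enabled []string
	for name, cb := range callbacks {
		if err := cb(ctx); err == nil {
			enabled = append(enabled, name)
		}
	}
	return enabled
}
```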
	I1217 00:34:36.762736   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:36.762736   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:36.765107   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:36.765107   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:36.765107   10364 type.go:168] "Request Body" body=""
	I1217 00:34:36.765638   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:36.768239   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:37.768638   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:37.768638   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:37.772263   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:38.772833   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:38.772833   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:38.775690   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:39.776860   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:39.776860   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:39.779543   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:40.779907   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:40.779907   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:40.782631   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:41.783358   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:41.783809   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:41.787117   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:42.787421   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:42.787421   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:42.790478   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:43.791393   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:43.791393   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:43.794768   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:44.795719   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:44.795719   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:44.799050   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:45.799750   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:45.800118   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:45.802333   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:46.802808   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:46.802808   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:46.806272   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:34:46.806272   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:46.806272   10364 type.go:168] "Request Body" body=""
	I1217 00:34:46.806272   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:46.808808   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:47.809106   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:47.809106   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:47.812072   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:48.812377   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:48.812377   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:48.815804   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:49.816160   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:49.816160   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:49.819073   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:50.819687   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:50.819687   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:50.824808   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:51.825256   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:51.825256   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:51.827149   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:34:52.828172   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:52.828172   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:52.831194   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:53.831502   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:53.831502   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:53.835949   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:54.836430   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:54.836430   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:54.840704   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:55.840945   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:55.840945   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:55.844273   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:56.844698   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:56.844774   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:56.847718   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:56.847718   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:56.847718   10364 type.go:168] "Request Body" body=""
	I1217 00:34:56.847718   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:56.850361   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:57.850724   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:57.850724   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:57.853992   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:58.854839   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:58.854839   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:58.857985   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:59.858686   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:59.859048   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:59.863493   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:35:00.863731   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:00.863731   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:00.867009   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:01.867548   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:01.867986   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:01.870485   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:02.870682   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:02.870682   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:02.874134   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:03.874927   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:03.874927   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:03.877992   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:04.878757   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:04.878757   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:04.882012   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:05.882985   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:05.882985   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:05.886320   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:06.887395   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:06.887395   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:06.890772   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:35:06.890844   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:35:06.890844   10364 type.go:168] "Request Body" body=""
	I1217 00:35:06.890844   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:06.892912   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:07.893541   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:07.893541   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:07.897243   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:08.897423   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:08.897423   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:08.901955   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:35:09.902222   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:09.902222   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:09.905347   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:10.906346   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:10.906346   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:10.909589   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:11.910013   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:11.910424   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:11.913496   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:12.913792   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:12.913792   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:12.917334   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:13.917794   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:13.917794   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:13.920911   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:14.921451   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:14.921902   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:14.924686   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:15.925539   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:15.925539   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:15.928618   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:16.928871   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:16.928871   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:16.932364   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:35:16.932364   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:35:16.932364   10364 type.go:168] "Request Body" body=""
	I1217 00:35:16.932364   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:16.935267   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:17.936075   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:17.936075   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:17.939252   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:18.940390   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:18.940390   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:18.943332   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:19.943802   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:19.943802   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:19.946902   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:20.947509   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:20.947882   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:20.949988   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:21.950644   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:21.950644   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:21.954065   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:22.954236   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:22.954236   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:22.958266   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:23.958794   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:23.959062   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:23.961451   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:24.962012   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:24.962012   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:24.965125   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:25.965439   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:25.965439   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:25.968637   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:26.968810   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:26.968810   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:26.971892   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:35:26.971961   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:35:26.972008   10364 type.go:168] "Request Body" body=""
	I1217 00:35:26.972008   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:26.977052   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:35:27.977730   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:27.977730   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:27.980941   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:28.981406   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:28.981406   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:28.984099   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:29.985140   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:29.985452   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:29.988385   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:30.989318   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:30.989318   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:30.992251   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:31.993148   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:31.993515   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:31.996483   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:32.996803   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:32.997153   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:32.999821   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:33.999930   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:33.999930   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:34.003148   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:35.003410   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:35.003410   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:35.006455   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:36.008349   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:36.008349   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:36.010952   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:37.011100   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:37.011100   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:37.014149   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:35:37.014149   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:35:37.014149   10364 type.go:168] "Request Body" body=""
	I1217 00:35:37.014678   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:37.016502   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:35:38.017464   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:38.017464   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:38.020305   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:39.020641   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:39.020641   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:39.023532   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:40.024042   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:40.024042   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:40.027707   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:41.027942   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:41.027942   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:41.031346   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:42.032292   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:42.032292   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:42.035463   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:43.035799   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:43.036298   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:43.039139   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:44.039453   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:44.039453   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:44.042907   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:45.043589   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:45.043589   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:45.046766   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:46.047648   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:46.047648   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:46.051224   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:47.051642   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:47.051642   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:47.054716   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:35:47.054716   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:35:47.054716   10364 type.go:168] "Request Body" body=""
	I1217 00:35:47.054716   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:47.056987   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:48.058345   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:48.058345   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:48.061555   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:49.061851   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:49.061851   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:49.065062   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:50.065656   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:50.065933   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:50.068127   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:51.068865   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:51.069263   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:51.071479   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:52.072199   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:52.072199   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:52.075414   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:53.076211   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:53.076211   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:53.079310   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:54.079644   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:54.079644   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:54.083395   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:55.083663   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:55.083663   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:55.086632   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:56.087097   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:56.087494   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:56.091591   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:35:57.091913   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:57.092314   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:57.095048   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:35:57.095048   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:35:57.095048   10364 type.go:168] "Request Body" body=""
	I1217 00:35:57.095640   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:57.098264   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:58.098993   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:58.098993   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:58.101747   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:59.103113   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:59.103113   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:59.105884   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:00.107028   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:00.107028   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:00.109881   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:01.110650   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:01.110650   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:01.114650   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:02.114915   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:02.114915   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:02.118186   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:03.118580   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:03.118580   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:03.121988   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:04.123025   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:04.123025   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:04.126587   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:05.127042   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:05.127451   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:05.132256   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:36:06.132687   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:06.133104   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:06.135375   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:07.137054   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:07.137054   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:07.140223   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:36:07.140223   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:36:07.140223   10364 type.go:168] "Request Body" body=""
	I1217 00:36:07.140223   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:07.142965   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:08.143629   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:08.143629   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:08.147215   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:09.147522   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:09.147522   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:09.150564   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:10.151061   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:10.151061   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:10.153608   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:11.154626   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:11.154626   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:11.157406   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:12.158277   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:12.158752   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:12.162911   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:36:13.163269   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:13.163269   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:13.166264   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:14.166990   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:14.166990   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:14.171561   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:36:15.171912   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:15.171912   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:15.175056   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:16.176256   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:16.176256   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:16.179133   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:17.179808   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:17.179808   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:17.182925   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:36:17.182976   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:36:17.183085   10364 type.go:168] "Request Body" body=""
	I1217 00:36:17.183154   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:17.186098   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:18.186373   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:18.186373   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:18.188978   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:19.189978   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:19.189978   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:19.193521   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:20.193758   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:20.194053   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:20.196502   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:21.196916   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:21.196916   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:21.200034   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:22.200545   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:22.200545   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:22.204008   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:23.205276   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:23.205569   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:23.207867   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:24.208451   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:24.208451   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:24.211642   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:25.212042   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:25.212042   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:25.214973   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:26.215279   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:26.215279   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:26.218537   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:27.219034   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:27.219034   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:27.221530   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:36:27.221530   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:36:27.222255   10364 type.go:168] "Request Body" body=""
	I1217 00:36:27.222319   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:27.225150   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	[log condensed: the identical GET https://127.0.0.1:56622/api/v1/nodes/functional-409700 request/response cycle shown above repeats once per second, with with_retry counting attempts 1-10 and every response returning in 1-5 ms with an empty status; after each tenth attempt the same warning is logged: error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF. Within this span the warning fires at 00:36:37, 00:36:47, 00:36:57, 00:37:07, 00:37:17, 00:37:27, 00:37:37, and 00:37:47, after which the request is reissued and the cycle resumes below.]
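	[editor's note: the loop above is minikube's node-readiness poll running into client-go's Retry-After handling, GETting the node once per second and treating the EOF as retryable. A minimal sketch of the same pattern, assuming a stock client-go setup; waitNodeReady and the kubeconfig wiring here are hypothetical illustrations, not minikube's actual node_ready.go:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady is a hypothetical helper mirroring the pattern in the log:
	// GET the node once per second, treat transient errors (such as the EOF
	// above) as retryable, and return once the Ready condition reports True.
	func waitNodeReady(ctx context.Context, c kubernetes.Interface, name string) error {
		ticker := time.NewTicker(time.Second)
		defer ticker.Stop()
		for {
			node, err := c.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				// Matches the log's "will retry" behaviour on EOF.
				fmt.Printf("error getting node %q condition \"Ready\" status (will retry): %v\n", name, err)
			} else {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err() // give up once the caller's deadline expires
			case <-ticker.C:
			}
		}
	}

	func main() {
		// Usage sketch: default kubeconfig, the node name from the log above.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "functional-409700"); err != nil {
			panic(err)
		}
	}

	The log resumes below, mid-cycle, at the point where the test eventually times out.]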
	I1217 00:37:48.553202   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:48.553202   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:48.558015   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:37:49.559371   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:49.559371   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:49.562548   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:50.562966   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:50.562966   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:50.565800   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:51.566293   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:51.566623   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:51.569597   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:52.570511   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:52.570511   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:52.573392   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:53.573965   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:53.573965   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:53.576340   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:54.577062   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:54.577463   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:54.579836   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:55.580473   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:55.580473   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:55.583734   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:56.584454   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:56.584454   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:56.587256   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:57.588397   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:57.588397   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:57.593527   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1217 00:37:57.593527   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:37:57.593527   10364 type.go:168] "Request Body" body=""
	I1217 00:37:57.593527   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:57.597825   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:37:58.598550   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:58.598550   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:58.602122   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:59.602444   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:59.602444   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:59.605501   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:00.606096   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:00.606096   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:00.608989   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:01.609865   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:01.609965   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:01.613038   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:02.613818   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:02.614067   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:02.617196   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:03.617950   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:03.618366   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:03.621156   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:04.621587   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:04.621587   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:04.624616   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:05.625123   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:05.625123   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:05.627780   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:06.628169   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:06.628602   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:06.632684   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:38:07.633450   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:07.633450   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:07.636697   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:38:07.636697   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:38:07.636697   10364 type.go:168] "Request Body" body=""
	I1217 00:38:07.636697   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:07.638671   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:38:08.639000   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:08.639000   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:08.642420   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:09.642718   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:09.642718   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:09.645881   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:10.646391   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:10.646391   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:10.649653   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:11.650077   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:11.650077   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:11.653855   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:12.654508   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:12.654508   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:12.657918   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:13.658238   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:13.658238   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:13.661446   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:14.661684   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:14.661684   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:14.664655   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:15.665257   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:15.665578   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:15.672111   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=6
	I1217 00:38:16.672363   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:16.672363   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:16.675593   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:17.676054   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:17.676054   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:17.679454   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:38:17.679454   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:38:17.679454   10364 type.go:168] "Request Body" body=""
	I1217 00:38:17.679454   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:17.681452   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:38:18.682087   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:18.682087   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:18.685399   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:19.686028   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:19.686535   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:19.689161   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:20.689948   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:20.690239   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:20.692554   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:21.693716   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:21.694009   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:21.696661   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:22.697780   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:22.697780   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:22.700917   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:23.702225   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:23.702225   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:23.705612   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:24.706750   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:24.706750   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:24.710496   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:25.710729   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:25.711065   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:25.713912   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:26.714178   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:26.714178   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:26.718058   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:27.718245   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:27.718578   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:27.721305   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:38:27.721375   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:38:27.721441   10364 type.go:168] "Request Body" body=""
	I1217 00:38:27.721441   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:27.723332   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:38:28.723805   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:28.724207   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:28.727033   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:29.727723   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:29.727723   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:29.730941   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:30.731355   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:30.731355   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:30.734083   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:31.734645   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:31.734645   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:31.737932   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:32.738159   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:32.738159   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:32.741332   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:33.741889   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:33.741889   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:33.744576   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:34.745133   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:34.745546   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:34.747888   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:35.749177   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:35.749177   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:35.751796   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:38:36.264530   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1217 00:38:36.264530   10364 node_ready.go:38] duration metric: took 6m0.0004133s for node "functional-409700" to be "Ready" ...
	I1217 00:38:36.268017   10364 out.go:203] 
	W1217 00:38:36.270772   10364 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 00:38:36.270772   10364 out.go:285] * 
	W1217 00:38:36.272556   10364 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:38:36.275101   10364 out.go:203] 
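
The six minutes of output above are minikube's node-readiness wait playing out: the client re-issues GET /api/v1/nodes/functional-409700 roughly once per second, with_retry.go counts each burst up to attempt=10, node_ready.go logs the resulting EOF and starts the next burst, and after 6m0s the client rate limiter returns "context deadline exceeded" and the start aborts with GUEST_START. The sketch below shows that polling pattern in Go using client-go's wait helper; it is illustrative only, and waitNodeReady is not minikube's actual node_ready.go code.

	package nodewait

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitNodeReady polls the node once per second until its Ready condition
	// is True or the timeout elapses. Transient errors (like the EOFs in the
	// log above) are swallowed so the poll keeps retrying until the deadline.
	func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // retry rather than abort the wait
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}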
	
	
	==> Docker <==
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.065379308Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.065401310Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.065424712Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.065461915Z" level=info msg="Initializing buildkit"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.183346289Z" level=info msg="Completed buildkit initialization"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.191707575Z" level=info msg="Daemon has completed initialization"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.191889990Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.191902191Z" level=info msg="API listen on [::]:2376"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.191916192Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 00:32:32 functional-409700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 00:32:32 functional-409700 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:32:32 functional-409700 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 17 00:32:32 functional-409700 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 17 00:32:32 functional-409700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Loaded network plugin cni"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 00:32:33 functional-409700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:40:46.928778   20198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:40:46.929493   20198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:40:46.933690   20198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:40:46.936220   20198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:40:46.937267   20198 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000806] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000803] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000826] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000811] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000815] FS:  0000000000000000 GS:  0000000000000000
	[Dec17 00:32] CPU: 7 PID: 54557 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000816] RIP: 0033:0x7f3abb92bb20
	[  +0.000446] Code: Unable to access opcode bytes at RIP 0x7f3abb92baf6.
	[  +0.000672] RSP: 002b:00007ffe2fcb88c0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000804] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000788] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000852] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001011] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001269] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001111] FS:  0000000000000000 GS:  0000000000000000
	[  +0.944697] CPU: 4 PID: 54682 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000867] RIP: 0033:0x7fa9cdbc0b20
	[  +0.000408] Code: Unable to access opcode bytes at RIP 0x7fa9cdbc0af6.
	[  +0.000668] RSP: 002b:00007ffde5330df0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.001045] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.001333] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001212] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001083] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000810] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000879] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 00:40:46 up 59 min,  0 user,  load average: 0.25, 0.33, 0.55
	Linux functional-409700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:40:43 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:40:44 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 989.
	Dec 17 00:40:44 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:40:44 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:40:44 functional-409700 kubelet[20036]: E1217 00:40:44.496124   20036 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:40:44 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:40:44 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:40:45 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 990.
	Dec 17 00:40:45 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:40:45 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:40:45 functional-409700 kubelet[20050]: E1217 00:40:45.242482   20050 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:40:45 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:40:45 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:40:45 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 991.
	Dec 17 00:40:45 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:40:45 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:40:46 functional-409700 kubelet[20075]: E1217 00:40:46.007415   20075 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:40:46 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:40:46 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:40:46 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 992.
	Dec 17 00:40:46 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:40:46 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:40:46 functional-409700 kubelet[20186]: E1217 00:40:46.748302   20186 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:40:46 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:40:46 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
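
Every kubelet restart in this section dies the same way: the v1.35.0-beta.0 kubelet refuses to run on a cgroup v1 host, and this WSL2 kernel is still on cgroup v1 (the Docker daemon prints the matching cgroup v1 deprecation warning at 00:32:32), so systemd loops through restart counters 989-992 without ever getting a running kubelet. Below is a minimal, hypothetical Go check for the property the kubelet is validating, assuming golang.org/x/sys/unix; kubelet's real validation code is not shown here.

	package cgroupcheck

	import "golang.org/x/sys/unix"

	// OnCgroupV2 reports whether /sys/fs/cgroup is a cgroup2 (unified) mount.
	// On a cgroup v1 host like this one it returns false, which is the state
	// the failing kubelet validation above rejects.
	func OnCgroupV2() (bool, error) {
		var st unix.Statfs_t
		if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
			return false, err
		}
		return st.Type == unix.CGROUP2_SUPER_MAGIC, nil
	}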
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700: exit status 2 (608.7703ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-409700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmd (54.31s)

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (54.23s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out\kubectl.exe --context functional-409700 get pods
functional_test.go:756: (dbg) Non-zero exit: out\kubectl.exe --context functional-409700 get pods: exit status 1 (50.5315683s)

** stderr ** 
	E1217 00:40:58.722632    7560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:41:08.816357    7560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:41:18.856550    7560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:41:28.899234    7560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:41:38.938794    7560 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:759: failed to run kubectl directly. args "out\\kubectl.exe --context functional-409700 get pods": exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-409700
helpers_test.go:244: (dbg) docker inspect functional-409700:

-- stdout --
	[
	    {
	        "Id": "ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de",
	        "Created": "2025-12-17T00:24:05.223199249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:24:05.522288836Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hosts",
	        "LogPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de-json.log",
	        "Name": "/functional-409700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-409700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-409700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-409700",
	                "Source": "/var/lib/docker/volumes/functional-409700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-409700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-409700",
	                "name.minikube.sigs.k8s.io": "functional-409700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e875b43ca920e8e90c82b8f1c4d2b0999a57d980ebe17c6406f45a4ccb58168",
	            "SandboxKey": "/var/run/docker/netns/6e875b43ca92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56623"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56619"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56620"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56621"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56622"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-409700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ee1b2722ed4e503e063723d4c0c00abc99d4e57387b6e181156511528a5a0896",
	                    "EndpointID": "42fbe7a4b084643a92cc2b6c93734665bcde06afb5eef9fe47b1c8f2757b2d71",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-409700",
	                        "ee5097ea8c4b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
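
The inspect output ties the two failing addresses together: inside the node the apiserver listens on 8441 (hence "localhost:8441 was refused" from the in-container kubectl), and Docker publishes that port to 127.0.0.1:56622 on the host, which is the URL the minikube client and out\kubectl.exe were retrying against. A short, hypothetical Go sketch of recovering that mapping from `docker inspect` JSON follows; the field names mirror the block above, while the package and function names are illustrative.

	package inspectport

	import (
		"encoding/json"
		"fmt"
	)

	// inspect models just the slice of the `docker inspect` JSON we need.
	type inspect struct {
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	// APIServerHostPort returns the host port bound to the container's
	// 8441/tcp, e.g. "56622" for the container above.
	func APIServerHostPort(raw []byte) (string, error) {
		var out []inspect
		if err := json.Unmarshal(raw, &out); err != nil {
			return "", err
		}
		if len(out) == 0 {
			return "", fmt.Errorf("empty inspect output")
		}
		bindings := out[0].NetworkSettings.Ports["8441/tcp"]
		if len(bindings) == 0 {
			return "", fmt.Errorf("8441/tcp is not published")
		}
		return bindings[0].HostPort, nil
	}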
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700: exit status 2 (696.7196ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 logs -n 25: (1.5777377s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-045600 image ls --format yaml --alsologtostderr                                                              │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ ssh            │ functional-045600 ssh pgrep buildkitd                                                                                   │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │                     │
	│ image          │ functional-045600 image build -t localhost/my-image:functional-045600 testdata\build --alsologtostderr                  │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls                                                                                              │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                 │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                 │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                 │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ delete         │ -p functional-045600                                                                                                    │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:23 UTC │ 17 Dec 25 00:23 UTC │
	│ start          │ -p functional-409700 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:23 UTC │                     │
	│ start          │ -p functional-409700 --alsologtostderr -v=8                                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ cache          │ functional-409700 cache add registry.k8s.io/pause:3.1                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ functional-409700 cache add registry.k8s.io/pause:3.3                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ functional-409700 cache add registry.k8s.io/pause:latest                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ functional-409700 cache add minikube-local-cache-test:functional-409700                                                 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ functional-409700 cache delete minikube-local-cache-test:functional-409700                                              │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh            │ functional-409700 ssh sudo crictl images                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh            │ functional-409700 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh            │ functional-409700 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │                     │
	│ cache          │ functional-409700 cache reload                                                                                          │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh            │ functional-409700 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ kubectl        │ functional-409700 kubectl -- --context functional-409700 get pods                                                       │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:32:25
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
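	Every entry below uses that glog-style header; decoding the first line of this start as a worked example:
	
	I1217 00:32:25.884023   10364 out.go:360] Setting OutFile to fd 1372 ...
	# I               -> severity (I/W/E/F = Info/Warning/Error/Fatal)
	# 1217            -> mmdd (December 17)
	# 00:32:25.884023 -> hh:mm:ss.uuuuuu
	# 10364           -> thread id (the minikube process for this start)
	# out.go:360      -> source file:line that emitted the message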
	I1217 00:32:25.884023   10364 out.go:360] Setting OutFile to fd 1372 ...
	I1217 00:32:25.926022   10364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:25.926022   10364 out.go:374] Setting ErrFile to fd 1800...
	I1217 00:32:25.926022   10364 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:32:25.940016   10364 out.go:368] Setting JSON to false
	I1217 00:32:25.942016   10364 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3134,"bootTime":1765928411,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:32:25.942016   10364 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:32:25.946016   10364 out.go:179] * [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:32:25.948015   10364 notify.go:221] Checking for updates...
	I1217 00:32:25.950019   10364 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:25.952018   10364 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:32:25.955015   10364 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:32:25.957015   10364 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:32:25.960017   10364 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:32:25.964016   10364 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:32:25.964016   10364 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:32:26.171156   10364 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:32:26.176438   10364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:32:26.427526   10364 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 00:32:26.406486235 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:32:26.434528   10364 out.go:179] * Using the docker driver based on existing profile
	I1217 00:32:26.436524   10364 start.go:309] selected driver: docker
	I1217 00:32:26.436524   10364 start.go:927] validating driver "docker" against &{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:26.436524   10364 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:32:26.442525   10364 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:32:26.668518   10364 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 00:32:26.649642613 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:32:26.752324   10364 cni.go:84] Creating CNI manager for ""
	I1217 00:32:26.752324   10364 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:32:26.752324   10364 start.go:353] cluster config:
	{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:26.755825   10364 out.go:179] * Starting "functional-409700" primary control-plane node in "functional-409700" cluster
	I1217 00:32:26.757701   10364 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 00:32:26.760609   10364 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:32:26.762036   10364 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:32:26.763103   10364 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 00:32:26.763103   10364 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:32:26.763103   10364 cache.go:65] Caching tarball of preloaded images
	I1217 00:32:26.763399   10364 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 00:32:26.763399   10364 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 00:32:26.763399   10364 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\config.json ...
	I1217 00:32:26.840670   10364 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:32:26.840729   10364 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:32:26.840729   10364 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:32:26.840729   10364 start.go:360] acquireMachinesLock for functional-409700: {Name:mk3729943c20c012b6c7db136193ce43a4a81cc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:32:26.840729   10364 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-409700"
	I1217 00:32:26.840729   10364 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:32:26.840729   10364 fix.go:54] fixHost starting: 
	I1217 00:32:26.848208   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:26.901821   10364 fix.go:112] recreateIfNeeded on functional-409700: state=Running err=<nil>
	W1217 00:32:26.901821   10364 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:32:26.907276   10364 out.go:252] * Updating the running docker "functional-409700" container ...
	I1217 00:32:26.907373   10364 machine.go:94] provisionDockerMachine start ...
	I1217 00:32:26.910817   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:26.967003   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:26.967068   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:26.967068   10364 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:32:27.152656   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:32:27.152656   10364 ubuntu.go:182] provisioning hostname "functional-409700"
	I1217 00:32:27.156074   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:27.214234   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:27.214712   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:27.214757   10364 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-409700 && echo "functional-409700" | sudo tee /etc/hostname
	I1217 00:32:27.407594   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:32:27.413090   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:27.490102   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:27.490703   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:27.490749   10364 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-409700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-409700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-409700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:32:27.672866   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:32:27.672866   10364 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 00:32:27.672866   10364 ubuntu.go:190] setting up certificates
	I1217 00:32:27.672866   10364 provision.go:84] configureAuth start
	I1217 00:32:27.676807   10364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:32:27.732901   10364 provision.go:143] copyHostCerts
	I1217 00:32:27.733091   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem
	I1217 00:32:27.733091   10364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 00:32:27.733091   10364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 00:32:27.733091   10364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 00:32:27.734330   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem
	I1217 00:32:27.734382   10364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 00:32:27.734382   10364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 00:32:27.734382   10364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 00:32:27.735088   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem
	I1217 00:32:27.735088   10364 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 00:32:27.735088   10364 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 00:32:27.735728   10364 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 00:32:27.736339   10364 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-409700 san=[127.0.0.1 192.168.49.2 functional-409700 localhost minikube]
	I1217 00:32:27.847670   10364 provision.go:177] copyRemoteCerts
	I1217 00:32:27.851712   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:32:27.854410   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:27.907971   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:28.027015   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I1217 00:32:28.027015   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 00:32:28.064351   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I1217 00:32:28.064351   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:32:28.092479   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I1217 00:32:28.092479   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:32:28.124650   10364 provision.go:87] duration metric: took 451.7801ms to configureAuth
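	configureAuth regenerated the Docker server certificate with the SANs listed above (127.0.0.1, 192.168.49.2, functional-409700, localhost, minikube) and copied it to /etc/docker inside the container. A quick way to double-check the SANs on the copied cert, assuming openssl is available in the container (not exercised by this test):
	
	# Show the Subject Alternative Names baked into the server certificate.
	sudo openssl x509 -in /etc/docker/server.pem -noout -text \
	  | grep -A1 'Subject Alternative Name'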
	I1217 00:32:28.124650   10364 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:32:28.125238   10364 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:32:28.128674   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.184894   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:28.185614   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:28.185614   10364 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 00:32:28.351273   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 00:32:28.351273   10364 ubuntu.go:71] root file system type: overlay
	I1217 00:32:28.351273   10364 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 00:32:28.355630   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.410840   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:28.411043   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:28.411043   10364 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 00:32:28.608128   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 00:32:28.612284   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.672356   10364 main.go:143] libmachine: Using SSH client type: native
	I1217 00:32:28.672356   10364 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff732b3fd00] 0x7ff732b42860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:32:28.672356   10364 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 00:32:28.839586   10364 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:32:28.839640   10364 machine.go:97] duration metric: took 1.9322227s to provisionDockerMachine
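	The unit update above is deliberately idempotent: minikube renders the desired docker.service to a .new file, diffs it against the installed unit, and only replaces and restarts on a difference. Stripped to its shape (paths exactly as in the log, unit body elided):
	
	# Restart docker only when the rendered unit actually changed.
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
	  sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	  sudo systemctl -f daemon-reload &&
	  sudo systemctl -f enable docker &&
	  sudo systemctl -f restart docker
	}
	
	Here the diff printed nothing and the command exited cleanly, so the running daemon was left alone; the explicit "systemctl restart docker" a few seconds further down happens in a separate step, after /etc/docker/daemon.json is rewritten.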
	I1217 00:32:28.839640   10364 start.go:293] postStartSetup for "functional-409700" (driver="docker")
	I1217 00:32:28.839640   10364 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:32:28.845012   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:32:28.847117   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:28.904187   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.040693   10364 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:32:29.050158   10364 command_runner.go:130] > PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	I1217 00:32:29.050158   10364 command_runner.go:130] > NAME="Debian GNU/Linux"
	I1217 00:32:29.050158   10364 command_runner.go:130] > VERSION_ID="12"
	I1217 00:32:29.050158   10364 command_runner.go:130] > VERSION="12 (bookworm)"
	I1217 00:32:29.050158   10364 command_runner.go:130] > VERSION_CODENAME=bookworm
	I1217 00:32:29.050158   10364 command_runner.go:130] > ID=debian
	I1217 00:32:29.050158   10364 command_runner.go:130] > HOME_URL="https://www.debian.org/"
	I1217 00:32:29.050158   10364 command_runner.go:130] > SUPPORT_URL="https://www.debian.org/support"
	I1217 00:32:29.050158   10364 command_runner.go:130] > BUG_REPORT_URL="https://bugs.debian.org/"
	I1217 00:32:29.050158   10364 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:32:29.050158   10364 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:32:29.050158   10364 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 00:32:29.050158   10364 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 00:32:29.050833   10364 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 00:32:29.050833   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> /etc/ssl/certs/41682.pem
	I1217 00:32:29.051707   10364 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts -> hosts in /etc/test/nested/copy/4168
	I1217 00:32:29.051707   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts -> /etc/test/nested/copy/4168/hosts
	I1217 00:32:29.055303   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4168
	I1217 00:32:29.070738   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 00:32:29.103807   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts --> /etc/test/nested/copy/4168/hosts (40 bytes)
	I1217 00:32:29.133625   10364 start.go:296] duration metric: took 293.9818ms for postStartSetup
	I1217 00:32:29.137970   10364 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:32:29.142249   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:29.194718   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.311046   10364 command_runner.go:130] > 1%
	I1217 00:32:29.316279   10364 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:32:29.324732   10364 command_runner.go:130] > 950G
	I1217 00:32:29.324732   10364 fix.go:56] duration metric: took 2.4839807s for fixHost
	I1217 00:32:29.324732   10364 start.go:83] releasing machines lock for "functional-409700", held for 2.4839807s
	I1217 00:32:29.330157   10364 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:32:29.384617   10364 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 00:32:29.388675   10364 ssh_runner.go:195] Run: cat /version.json
	I1217 00:32:29.388675   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:29.392044   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:29.442282   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.464827   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:29.558946   10364 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	W1217 00:32:29.559478   10364 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 00:32:29.581467   10364 command_runner.go:130] > {"iso_version": "v1.37.0-1765579389-22117", "kicbase_version": "v0.0.48-1765661130-22141", "minikube_version": "v1.37.0", "commit": "cbb33128a244032d08f8fc6e6c9f03b30f0da3e4"}
	I1217 00:32:29.585625   10364 ssh_runner.go:195] Run: systemctl --version
	I1217 00:32:29.598125   10364 command_runner.go:130] > systemd 252 (252.39-1~deb12u1)
	I1217 00:32:29.598125   10364 command_runner.go:130] > +PAM +AUDIT +SELINUX +APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT +QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified
	I1217 00:32:29.602648   10364 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1217 00:32:29.614417   10364 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W1217 00:32:29.615099   10364 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:32:29.621960   10364 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:32:29.646439   10364 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
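	Before wiring up its own CNI, minikube sweeps /etc/cni/net.d for pre-existing bridge/podman configs and parks them under a .mk_disabled suffix rather than deleting them. The find invocation above, reflowed with comments (same behavior; GNU find substitutes the {} inside the -exec argument):
	
	# List and rename any active bridge/podman CNI configs in one pass.
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
	  -printf '%p, ' \
	  -exec sh -c 'sudo mv {} {}.mk_disabled' \;
	# Nothing matched on this run, hence "no active bridge cni configs found".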
	I1217 00:32:29.646439   10364 start.go:496] detecting cgroup driver to use...
	I1217 00:32:29.646439   10364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:32:29.646439   10364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:32:29.668226   10364 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I1217 00:32:29.672516   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:32:29.695799   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:32:29.710451   10364 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:32:29.715117   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1217 00:32:29.723829   10364 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 00:32:29.723829   10364 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 00:32:29.737249   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:32:29.756347   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:32:29.779698   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:32:29.801679   10364 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:32:29.825863   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:32:29.844752   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:32:29.865139   10364 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
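	The sed edits above bring containerd in line with the "cgroupfs" driver detected on the host: SystemdCgroup is forced to false for the runc runtime, the sandbox image is pinned to registry.k8s.io/pause:3.10.1, and unprivileged ports are enabled. After a run like this the result can be spot-checked in place (assuming the default config path edited above):
	
	# Confirm the cgroup driver containerd was left with.
	sudo grep -n 'SystemdCgroup' /etc/containerd/config.toml
	# expected: SystemdCgroup = false   (matching the detected cgroupfs driver)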
	I1217 00:32:29.885382   10364 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:32:29.900142   10364 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I1217 00:32:29.904180   10364 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:32:29.922078   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:30.133548   10364 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 00:32:30.412249   10364 start.go:496] detecting cgroup driver to use...
	I1217 00:32:30.412298   10364 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:32:30.416670   10364 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 00:32:30.435945   10364 command_runner.go:130] > # /lib/systemd/system/docker.service
	I1217 00:32:30.435945   10364 command_runner.go:130] > [Unit]
	I1217 00:32:30.435945   10364 command_runner.go:130] > Description=Docker Application Container Engine
	I1217 00:32:30.435945   10364 command_runner.go:130] > Documentation=https://docs.docker.com
	I1217 00:32:30.435945   10364 command_runner.go:130] > After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	I1217 00:32:30.435945   10364 command_runner.go:130] > Wants=network-online.target containerd.service
	I1217 00:32:30.435945   10364 command_runner.go:130] > Requires=docker.socket
	I1217 00:32:30.435945   10364 command_runner.go:130] > StartLimitBurst=3
	I1217 00:32:30.435945   10364 command_runner.go:130] > StartLimitIntervalSec=60
	I1217 00:32:30.435945   10364 command_runner.go:130] > [Service]
	I1217 00:32:30.435945   10364 command_runner.go:130] > Type=notify
	I1217 00:32:30.435945   10364 command_runner.go:130] > Restart=always
	I1217 00:32:30.435945   10364 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I1217 00:32:30.435945   10364 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I1217 00:32:30.435945   10364 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I1217 00:32:30.435945   10364 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I1217 00:32:30.435945   10364 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I1217 00:32:30.435945   10364 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I1217 00:32:30.435945   10364 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I1217 00:32:30.435945   10364 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I1217 00:32:30.435945   10364 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I1217 00:32:30.435945   10364 command_runner.go:130] > ExecStart=
	I1217 00:32:30.435945   10364 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	I1217 00:32:30.435945   10364 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I1217 00:32:30.435945   10364 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I1217 00:32:30.435945   10364 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I1217 00:32:30.435945   10364 command_runner.go:130] > LimitNOFILE=infinity
	I1217 00:32:30.435945   10364 command_runner.go:130] > LimitNPROC=infinity
	I1217 00:32:30.435945   10364 command_runner.go:130] > LimitCORE=infinity
	I1217 00:32:30.435945   10364 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I1217 00:32:30.435945   10364 command_runner.go:130] > # Only systemd 226 and above support this version.
	I1217 00:32:30.435945   10364 command_runner.go:130] > TasksMax=infinity
	I1217 00:32:30.437404   10364 command_runner.go:130] > TimeoutStartSec=0
	I1217 00:32:30.437404   10364 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I1217 00:32:30.437404   10364 command_runner.go:130] > Delegate=yes
	I1217 00:32:30.437404   10364 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I1217 00:32:30.437404   10364 command_runner.go:130] > KillMode=process
	I1217 00:32:30.437404   10364 command_runner.go:130] > OOMScoreAdjust=-500
	I1217 00:32:30.437404   10364 command_runner.go:130] > [Install]
	I1217 00:32:30.437404   10364 command_runner.go:130] > WantedBy=multi-user.target
	I1217 00:32:30.443833   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:32:30.468114   10364 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:32:30.542786   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:32:30.567969   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:32:30.586631   10364 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:32:30.606342   10364 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
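	Note the endpoint flip: at 00:32:29 crictl.yaml pointed at containerd's socket, and now that containerd has been stopped in favor of the Docker runtime it is rewritten to cri-dockerd's socket. With /etc/crictl.yaml in place, bare crictl calls reach the right runtime, which is why the plain "crictl version" near the end of this log reports RuntimeName: docker. The equivalent explicit invocation, bypassing the config file:
	
	# Point crictl at cri-dockerd directly instead of via /etc/crictl.yaml.
	sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version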
	I1217 00:32:30.611878   10364 ssh_runner.go:195] Run: which cri-dockerd
	I1217 00:32:30.618659   10364 command_runner.go:130] > /usr/bin/cri-dockerd
	I1217 00:32:30.623279   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 00:32:30.636760   10364 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 00:32:30.661689   10364 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 00:32:30.828747   10364 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 00:32:30.988536   10364 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 00:32:30.988536   10364 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 00:32:31.016800   10364 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 00:32:31.041396   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:31.178126   10364 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 00:32:32.195651   10364 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0175164s)
	I1217 00:32:32.199801   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:32:32.224938   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 00:32:32.247199   10364 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 00:32:32.275016   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:32:32.297360   10364 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 00:32:32.448301   10364 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 00:32:32.597398   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:32.739627   10364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 00:32:32.765463   10364 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 00:32:32.790341   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:32.929296   10364 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 00:32:33.067092   10364 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:32:33.087872   10364 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 00:32:33.092277   10364 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 00:32:33.102122   10364 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I1217 00:32:33.102122   10364 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I1217 00:32:33.102122   10364 command_runner.go:130] > Device: 0,112	Inode: 1758        Links: 1
	I1217 00:32:33.102122   10364 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (  997/  docker)
	I1217 00:32:33.102122   10364 command_runner.go:130] > Access: 2025-12-17 00:32:32.939070006 +0000
	I1217 00:32:33.102122   10364 command_runner.go:130] > Modify: 2025-12-17 00:32:32.939070006 +0000
	I1217 00:32:33.102122   10364 command_runner.go:130] > Change: 2025-12-17 00:32:32.939070006 +0000
	I1217 00:32:33.103099   10364 command_runner.go:130] >  Birth: -
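
After restarting cri-docker.service, minikube waits up to 60s for /var/run/cri-dockerd.sock to appear before trusting the CRI endpoint; in the log this is done by running stat over SSH, as above. A local-only sketch of the same wait, assuming direct filesystem access rather than an SSH runner:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists and is a unix socket, or the
    // timeout elapses. A simplified stand-in for the "Will wait 60s for
    // socket path" step seen in the log.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		fmt.Println(err)
    	}
    }
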
	I1217 00:32:33.103099   10364 start.go:564] Will wait 60s for crictl version
	I1217 00:32:33.106627   10364 ssh_runner.go:195] Run: which crictl
	I1217 00:32:33.116038   10364 command_runner.go:130] > /usr/local/bin/crictl
	I1217 00:32:33.119921   10364 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:32:33.163697   10364 command_runner.go:130] > Version:  0.1.0
	I1217 00:32:33.163697   10364 command_runner.go:130] > RuntimeName:  docker
	I1217 00:32:33.163697   10364 command_runner.go:130] > RuntimeVersion:  29.1.3
	I1217 00:32:33.163697   10364 command_runner.go:130] > RuntimeApiVersion:  v1
	I1217 00:32:33.163697   10364 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 00:32:33.167790   10364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:32:33.207644   10364 command_runner.go:130] > 29.1.3
	I1217 00:32:33.212842   10364 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:32:33.256029   10364 command_runner.go:130] > 29.1.3
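
The version probe above is just `docker version` with a Go-template --format string, so only the server version string comes back. A sketch of the same call with os/exec, assuming a docker CLI on PATH (minikube runs it over SSH inside the node):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // dockerServerVersion asks the daemon for just its version, mirroring the
    // `docker version --format {{.Server.Version}}` invocation in the log.
    func dockerServerVersion() (string, error) {
    	out, err := exec.Command("docker", "version", "--format", "{{.Server.Version}}").Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    func main() {
    	v, err := dockerServerVersion()
    	if err != nil {
    		fmt.Println("docker not reachable:", err)
    		return
    	}
    	fmt.Println("server version:", v)
    }
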
	I1217 00:32:33.258896   10364 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 00:32:33.262892   10364 cli_runner.go:164] Run: docker exec -t functional-409700 dig +short host.docker.internal
	I1217 00:32:33.463377   10364 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 00:32:33.467155   10364 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 00:32:33.475542   10364 command_runner.go:130] > 192.168.65.254	host.minikube.internal
	I1217 00:32:33.478907   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:33.533350   10364 kubeadm.go:884] updating cluster {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:32:33.533350   10364 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:32:33.537278   10364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1217 00:32:33.575248   10364 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1217 00:32:33.575248   10364 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:32:33.575248   10364 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 00:32:33.575248   10364 docker.go:621] Images already preloaded, skipping extraction
	I1217 00:32:33.579121   10364 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:32:33.614970   10364 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 00:32:33.615044   10364 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 00:32:33.615044   10364 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.13.1
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/etcd:3.6.5-0
	I1217 00:32:33.615085   10364 command_runner.go:130] > registry.k8s.io/pause:3.10.1
	I1217 00:32:33.615141   10364 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:32:33.615171   10364 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 00:32:33.615171   10364 cache_images.go:86] Images are preloaded, skipping loading
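
The extraction skip above is a set comparison: every image the selected Kubernetes version needs must already appear in the `docker images --format {{.Repository}}:{{.Tag}}` listing. A sketch of that check (helper name invented for illustration; image names taken from the log):

    package main

    import "fmt"

    // imagesPreloaded reports whether every required image is already present
    // in the daemon's image list, so the preload tarball can be skipped.
    func imagesPreloaded(have []string, want []string) bool {
    	set := make(map[string]bool, len(have))
    	for _, img := range have {
    		set[img] = true
    	}
    	for _, img := range want {
    		if !set[img] {
    			return false
    		}
    	}
    	return true
    }

    func main() {
    	have := []string{
    		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
    		"registry.k8s.io/pause:3.10.1",
    	}
    	fmt.Println(imagesPreloaded(have, []string{"registry.k8s.io/pause:3.10.1"}))
    }
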
	I1217 00:32:33.615171   10364 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1217 00:32:33.615349   10364 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-409700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
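
The kubelet unit above is re-rendered per profile so --hostname-override and --node-ip track the node. A text/template sketch of producing that ExecStart; the struct and template text are illustrative, and minikube's real unit carries more flags:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Illustrative template: clear the inherited ExecStart, then set one with
    // per-node values, as in the unit shown in the log.
    const unit = "[Service]\nExecStart=\nExecStart={{.Bin}} --hostname-override={{.Node}} --node-ip={{.IP}} --kubeconfig=/etc/kubernetes/kubelet.conf\n"

    func main() {
    	t := template.Must(template.New("kubelet").Parse(unit))
    	_ = t.Execute(os.Stdout, struct{ Bin, Node, IP string }{
    		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet", "functional-409700", "192.168.49.2",
    	})
    }
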
	I1217 00:32:33.618510   10364 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 00:32:34.052354   10364 command_runner.go:130] > cgroupfs
	I1217 00:32:34.052472   10364 cni.go:84] Creating CNI manager for ""
	I1217 00:32:34.052529   10364 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:32:34.052529   10364 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:32:34.052529   10364 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-409700 NodeName:functional-409700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:32:34.052529   10364 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-409700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
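
kubeadm accepts InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration in a single file separated by `---`, which is the multi-document payload written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A sketch of assembling it (document contents elided):

    package main

    import (
    	"fmt"
    	"strings"
    )

    // joinDocs concatenates YAML documents with the "---" separator that
    // kubeadm expects between config kinds.
    func joinDocs(docs ...string) string {
    	return strings.Join(docs, "---\n")
    }

    func main() {
    	initCfg := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: InitConfiguration\n"
    	cluster := "apiVersion: kubeadm.k8s.io/v1beta4\nkind: ClusterConfiguration\n"
    	kubelet := "apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n"
    	fmt.Print(joinDocs(initCfg, cluster, kubelet))
    }
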
	I1217 00:32:34.056808   10364 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:32:34.073105   10364 command_runner.go:130] > kubeadm
	I1217 00:32:34.073177   10364 command_runner.go:130] > kubectl
	I1217 00:32:34.073177   10364 command_runner.go:130] > kubelet
	I1217 00:32:34.073240   10364 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:32:34.077459   10364 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:32:34.090893   10364 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 00:32:34.114750   10364 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:32:34.135531   10364 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1217 00:32:34.159985   10364 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:32:34.168280   10364 command_runner.go:130] > 192.168.49.2	control-plane.minikube.internal
	I1217 00:32:34.172492   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:34.310890   10364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:32:34.700023   10364 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700 for IP: 192.168.49.2
	I1217 00:32:34.700115   10364 certs.go:195] generating shared ca certs ...
	I1217 00:32:34.700115   10364 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:34.700485   10364 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 00:32:34.701055   10364 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 00:32:34.701055   10364 certs.go:257] generating profile certs ...
	I1217 00:32:34.701864   10364 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\client.key
	I1217 00:32:34.702120   10364 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key.dc66fb1b
	I1217 00:32:34.702437   10364 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key
	I1217 00:32:34.702487   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I1217 00:32:34.702646   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I1217 00:32:34.702720   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I1217 00:32:34.703540   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 00:32:34.703598   10364 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 00:32:34.703598   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 00:32:34.703598   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 00:32:34.704137   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 00:32:34.704439   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 00:32:34.704439   10364 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 00:32:34.704439   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:34.704970   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem -> /usr/share/ca-certificates/4168.pem
	I1217 00:32:34.705196   10364 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> /usr/share/ca-certificates/41682.pem
	I1217 00:32:34.706089   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:32:34.736497   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:32:34.769712   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:32:34.802984   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:32:34.830525   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:32:34.860563   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:32:34.889179   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:32:34.920536   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:32:34.947027   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:32:34.978500   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 00:32:35.008458   10364 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 00:32:35.040774   10364 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:32:35.063574   10364 ssh_runner.go:195] Run: openssl version
	I1217 00:32:35.083169   10364 command_runner.go:130] > OpenSSL 3.0.17 1 Jul 2025 (Library: OpenSSL 3.0.17 1 Jul 2025)
	I1217 00:32:35.087374   10364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.105491   10364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:32:35.130590   10364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.139034   10364 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.139034   10364 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.144343   10364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:32:35.192130   10364 command_runner.go:130] > b5213941
	I1217 00:32:35.199882   10364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
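
The hash seen above (b5213941) comes from `openssl x509 -hash`, which prints the certificate's subject hash; OpenSSL then looks the CA up at /etc/ssl/certs/<hash>.0, so that symlink must point at the PEM. The 4168.pem and 41682.pem blocks that follow repeat the same pattern. A sketch of the install step, assuming the openssl CLI and permission to write /etc/ssl/certs:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCA links a CA PEM into /etc/ssl/certs under its subject hash,
    // mirroring the hash-and-symlink steps in the log.
    func installCA(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
    	link := "/etc/ssl/certs/" + hash + ".0"
    	os.Remove(link) // replace a stale link, if any; error ignored on purpose
    	return os.Symlink(pem, link)
    }

    func main() {
    	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
    		fmt.Println(err)
    	}
    }
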
	I1217 00:32:35.220625   10364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.238544   10364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 00:32:35.259065   10364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.266549   10364 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.266638   10364 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.271223   10364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 00:32:35.315698   10364 command_runner.go:130] > 51391683
	I1217 00:32:35.322687   10364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:32:35.339650   10364 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.358290   10364 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 00:32:35.374891   10364 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.383058   10364 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.383058   10364 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.387660   10364 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 00:32:35.431595   10364 command_runner.go:130] > 3ec20f2e
	I1217 00:32:35.436891   10364 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:32:35.453526   10364 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:32:35.462183   10364 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:32:35.462183   10364 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I1217 00:32:35.462183   10364 command_runner.go:130] > Device: 8,48	Inode: 15294       Links: 1
	I1217 00:32:35.462183   10364 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I1217 00:32:35.462183   10364 command_runner.go:130] > Access: 2025-12-17 00:28:21.018933524 +0000
	I1217 00:32:35.462183   10364 command_runner.go:130] > Modify: 2025-12-17 00:24:18.315890848 +0000
	I1217 00:32:35.462183   10364 command_runner.go:130] > Change: 2025-12-17 00:24:18.315890848 +0000
	I1217 00:32:35.462183   10364 command_runner.go:130] >  Birth: 2025-12-17 00:24:18.315890848 +0000
	I1217 00:32:35.466206   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:32:35.509324   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.514900   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:32:35.558615   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.563444   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:32:35.608112   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.612517   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:32:35.657914   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.662797   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:32:35.707243   10364 command_runner.go:130] > Certificate will not expire
	I1217 00:32:35.713694   10364 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 00:32:35.760477   10364 command_runner.go:130] > Certificate will not expire
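
`openssl x509 -checkend 86400` exits 0 (printing "Certificate will not expire") only if the certificate is still valid 24 hours from now, which is how each profile cert above is screened for imminent expiry. The same check in Go with crypto/x509; the path is illustrative:

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the certificate at path expires within d,
    // the Go equivalent of `openssl x509 -checkend`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM data in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		fmt.Println(err)
    		return
    	}
    	fmt.Println("expires within 24h:", soon)
    }
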
	I1217 00:32:35.761002   10364 kubeadm.go:401] StartCluster: {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:32:35.764353   10364 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 00:32:35.796231   10364 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:32:35.810900   10364 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I1217 00:32:35.810946   10364 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I1217 00:32:35.810946   10364 command_runner.go:130] > /var/lib/minikube/etcd:
	I1217 00:32:35.810996   10364 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:32:35.810996   10364 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:32:35.815318   10364 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:32:35.828811   10364 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:32:35.832840   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:35.889236   10364 kubeconfig.go:47] verify endpoint returned: get endpoint: "functional-409700" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:35.889236   10364 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "functional-409700" cluster setting kubeconfig missing "functional-409700" context setting]
	I1217 00:32:35.889236   10364 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:35.906814   10364 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:35.907042   10364 kapi.go:59] client config for functional-409700: &rest.Config{Host:"https://127.0.0.1:56622", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff734ad9080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:32:35.908414   10364 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 00:32:35.908474   10364 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 00:32:35.912354   10364 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:32:35.931570   10364 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1217 00:32:35.931672   10364 kubeadm.go:602] duration metric: took 120.6751ms to restartPrimaryControlPlane
	I1217 00:32:35.931672   10364 kubeadm.go:403] duration metric: took 170.6688ms to StartCluster
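
The fast restart above hinges on the `diff -u` at 00:32:35.912: exit status 0 means the freshly rendered kubeadm.yaml.new matches what is already on disk, so the control plane is restarted in place rather than re-initialized. A sketch of branching on diff's exit code:

    package main

    import (
    	"errors"
    	"fmt"
    	"os/exec"
    )

    // needsReconfig runs diff on the current and newly rendered kubeadm
    // configs: exit 0 means identical, exit 1 means they differ, anything
    // else is a diff failure.
    func needsReconfig(oldPath, newPath string) (bool, error) {
    	err := exec.Command("diff", "-u", oldPath, newPath).Run()
    	if err == nil {
    		return false, nil // exit 0: files identical
    	}
    	var ee *exec.ExitError
    	if errors.As(err, &ee) && ee.ExitCode() == 1 {
    		return true, nil // exit 1: files differ
    	}
    	return false, err // diff itself failed
    }

    func main() {
    	diff, err := needsReconfig("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
    	fmt.Println(diff, err)
    }
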
	I1217 00:32:35.931672   10364 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:35.931672   10364 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:35.932861   10364 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:32:35.933736   10364 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 00:32:35.933736   10364 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 00:32:35.933901   10364 addons.go:70] Setting storage-provisioner=true in profile "functional-409700"
	I1217 00:32:35.933901   10364 addons.go:239] Setting addon storage-provisioner=true in "functional-409700"
	I1217 00:32:35.933901   10364 addons.go:70] Setting default-storageclass=true in profile "functional-409700"
	I1217 00:32:35.934051   10364 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:32:35.934098   10364 host.go:66] Checking if "functional-409700" exists ...
	I1217 00:32:35.934098   10364 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "functional-409700"
	I1217 00:32:35.936531   10364 out.go:179] * Verifying Kubernetes components...
	I1217 00:32:35.942620   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:35.942620   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:35.944620   10364 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:32:36.000654   10364 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 00:32:36.002654   10364 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:36.002654   10364 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 00:32:36.005647   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:36.010648   10364 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:32:36.011652   10364 kapi.go:59] client config for functional-409700: &rest.Config{Host:"https://127.0.0.1:56622", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff734ad9080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 00:32:36.012648   10364 cert_rotation.go:141] "Starting client certificate rotation controller" logger="tls-transport-cache"
	I1217 00:32:36.012648   10364 addons.go:239] Setting addon default-storageclass=true in "functional-409700"
	I1217 00:32:36.012648   10364 host.go:66] Checking if "functional-409700" exists ...
	I1217 00:32:36.019655   10364 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:32:36.056654   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:36.069645   10364 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:36.069645   10364 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 00:32:36.072658   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:36.098645   10364 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:32:36.122646   10364 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:32:36.187680   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:36.202921   10364 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:32:36.260682   10364 node_ready.go:35] waiting up to 6m0s for node "functional-409700" to be "Ready" ...
	I1217 00:32:36.260849   10364 type.go:168] "Request Body" body=""
	I1217 00:32:36.261061   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:36.264195   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
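
At this point the apiserver behind 127.0.0.1:56622 is still coming back up, so the node GET above and the addon applies below both fail and retry for a while. A sketch of that request as plain HTTP, with the Accept header from the log; TLS verification is skipped only to keep the sketch short, whereas minikube authenticates with its generated client certificate:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    )

    func main() {
    	// Probe the forwarded apiserver port, as the node-readiness poll does.
    	c := &http.Client{Transport: &http.Transport{
    		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    	}}
    	req, _ := http.NewRequest("GET", "https://127.0.0.1:56622/api/v1/nodes/functional-409700", nil)
    	req.Header.Set("Accept", "application/vnd.kubernetes.protobuf,application/json")
    	resp, err := c.Do(req)
    	if err != nil {
    		fmt.Println(err) // connection refused while the apiserver restarts
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println(resp.Status) // 401 without a client cert, 200 once authenticated
    }
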
	I1217 00:32:36.265260   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:36.336693   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.340106   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.340627   10364 retry.go:31] will retry after 202.939607ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.388976   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.393288   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.393853   10364 retry.go:31] will retry after 227.289762ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
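
Each failed apply is retried after a randomized, growing delay (202ms, 227ms, 395ms, ... in the log) until the apiserver on localhost:8441 accepts connections again. A generic sketch of that retry-with-jittered-backoff pattern; the parameters are illustrative, not minikube's actual backoff settings:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn up to attempts times, sleeping an exponentially growing,
    // jittered delay between failures.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		// exponential backoff with jitter: base * 2^i scaled into [0.5x, 1.5x)
    		d := base << uint(i)
    		d = d/2 + time.Duration(rand.Int63n(int64(d)))
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	i := 0
    	err := retry(5, 200*time.Millisecond, func() error {
    		i++
    		if i < 3 {
    			return errors.New("connection refused") // stand-in for the kubectl failure
    		}
    		return nil
    	})
    	fmt.Println(i, err)
    }
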
	I1217 00:32:36.548879   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:36.622050   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.626260   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.626260   10364 retry.go:31] will retry after 395.113457ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.626489   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:36.698520   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:36.702459   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:36.702459   10364 retry.go:31] will retry after 468.39049ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.026805   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:37.111151   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.116224   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.116762   10364 retry.go:31] will retry after 792.119284ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.177175   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:37.249858   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.255359   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.255359   10364 retry.go:31] will retry after 596.241339ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.265542   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:37.265542   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:37.267933   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:37.856198   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:37.913554   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:37.941640   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.944331   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.944331   10364 retry.go:31] will retry after 571.98292ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.986334   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:37.989310   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:37.989310   10364 retry.go:31] will retry after 625.589854ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.268385   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:38.268385   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:38.271420   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:38.521873   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:38.599872   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:38.599872   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.599872   10364 retry.go:31] will retry after 1.272749266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.621006   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:38.701213   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:38.701287   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:38.701287   10364 retry.go:31] will retry after 729.524766ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.272125   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:39.272125   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:39.274907   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:39.436175   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:39.531183   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:39.531183   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.531183   10364 retry.go:31] will retry after 993.07118ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.877780   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:39.947906   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:39.950459   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:39.950459   10364 retry.go:31] will retry after 981.929326ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:40.275982   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:40.275982   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:40.278602   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:40.529721   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:40.604194   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:40.610090   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:40.610090   10364 retry.go:31] will retry after 3.313570586s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:40.937823   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:41.010101   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:41.013448   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:41.013448   10364 retry.go:31] will retry after 3.983327016s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:41.279217   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:41.279217   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:41.282049   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:42.282642   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:42.282642   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:42.285895   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:43.285957   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:43.285957   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:43.289436   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:43.928516   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:44.010824   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:44.016536   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:44.016536   10364 retry.go:31] will retry after 3.387443088s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:44.290770   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:44.290770   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:44.293999   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:45.002652   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:45.076704   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:45.080905   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:45.080905   10364 retry.go:31] will retry after 2.289915246s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:45.294211   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:45.294211   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:45.297045   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:46.297784   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:46.297784   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:46.300989   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:32:46.300989   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:32:46.300989   10364 type.go:168] "Request Body" body=""
	I1217 00:32:46.300989   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:46.304308   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
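
What the poll is ultimately waiting for is the node's Ready condition turning True. A hedged sketch of that check with client-go (the kubeconfig path and clientset wiring here are assumptions; minikube's own version sits in node_ready.go and differs in detail):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches the node and reports whether its Ready condition is True.
    func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err // e.g. the EOF seen above while the apiserver is down
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        // Assumed kubeconfig path; inside the VM the test uses /var/lib/minikube/kubeconfig.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        ready, err := nodeReady(cs, "functional-409700")
        fmt.Println(ready, err)
    }
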
	I1217 00:32:47.305471   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:47.305471   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:47.308634   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:47.375936   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:47.409078   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:47.458764   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:47.458804   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:47.458804   10364 retry.go:31] will retry after 7.569688135s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:47.484927   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:47.488464   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:47.488464   10364 retry.go:31] will retry after 9.157991048s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
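
The two manifests retry on independent schedules in parallel, which is why their failures interleave rather than alternate cleanly; the delays above (7.57s for storageclass, 9.16s for storage-provisioner) belong to separate backoff clocks. The structure is roughly one goroutine per addon, sketched below with illustrative names:

    package main

    import (
        "errors"
        "fmt"
        "sync"
        "time"
    )

    // applyWithRetry stands in for one addon's apply-and-retry loop; each
    // addon gets its own goroutine and its own backoff clock, so their log
    // lines interleave exactly as above. Bounded to three attempts here
    // purely for the sketch.
    func applyWithRetry(name string, delay time.Duration) {
        for i := 0; i < 3; i++ {
            err := errors.New("connection refused")
            fmt.Printf("apply %s failed, will retry after %s: %v\n", name, delay, err)
            time.Sleep(delay)
            delay *= 2
        }
    }

    func main() {
        var wg sync.WaitGroup
        for _, m := range []string{"storageclass.yaml", "storage-provisioner.yaml"} {
            wg.Add(1)
            go func(m string) {
                defer wg.Done()
                applyWithRetry(m, 500*time.Millisecond)
            }(m)
        }
        wg.Wait()
    }
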
	I1217 00:32:48.309180   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:48.309180   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:48.312403   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:49.312469   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:49.312469   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:49.315488   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:50.316234   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:50.316234   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:50.319889   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:51.320680   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:51.320680   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:51.324928   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:32:52.325755   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:52.325755   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:52.328987   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:53.329277   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:53.329277   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:53.332508   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:54.333122   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:54.333449   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:54.337390   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:55.034235   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:32:55.110067   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:55.114541   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:55.114568   10364 retry.go:31] will retry after 11.854567632s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:55.338017   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:55.338017   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:55.341093   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:56.341403   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:56.341403   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:56.344366   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:32:56.344366   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:32:56.344366   10364 type.go:168] "Request Body" body=""
	I1217 00:32:56.344898   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:56.347007   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:32:56.652443   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:32:56.739536   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:32:56.739536   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:32:56.739536   10364 retry.go:31] will retry after 10.780280137s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
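
None of these kubectl invocations run on the Windows host: ssh_runner executes them inside the node container over SSH, which is why each attempt also pays a connection round trip. A minimal sketch of that remote-exec step with golang.org/x/crypto/ssh; the address, user, and key below are placeholders, since minikube wires in its own generated identity:

    package main

    import (
        "fmt"

        "golang.org/x/crypto/ssh"
    )

    // runRemote runs one command on a remote host and returns its combined output.
    func runRemote(addr, user, keyPEM, cmd string) (string, error) {
        signer, err := ssh.ParsePrivateKey([]byte(keyPEM))
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only shortcut
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(cmd)
        return string(out), err
    }

    func main() {
        out, err := runRemote("127.0.0.1:22", "docker", "<private key PEM>",
            "sudo kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml")
        fmt.Println(out, err)
    }
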
	I1217 00:32:57.347379   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:57.347379   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:57.350807   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:58.351069   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:58.351069   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:58.354096   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:32:59.354451   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:32:59.354451   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:32:59.357775   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:00.357853   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:00.357853   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:00.362050   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:33:01.362288   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:01.362722   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:01.365594   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:02.365849   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:02.366254   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:02.369208   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:03.369619   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:03.369619   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:03.373087   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:04.373596   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:04.373596   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:04.376267   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:05.376901   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:05.376901   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:05.380341   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:06.380779   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:06.380779   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:06.384486   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:06.384486   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:06.384486   10364 type.go:168] "Request Body" body=""
	I1217 00:33:06.384486   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:06.386883   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:06.975138   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:33:07.047365   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:07.053212   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:07.053212   10364 retry.go:31] will retry after 9.4400792s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:07.388016   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:07.388016   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:07.391682   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:07.525003   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:33:07.600422   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:07.604097   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:07.604097   10364 retry.go:31] will retry after 21.608180779s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:08.392667   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:08.392667   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:08.395310   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:09.395626   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:09.395626   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:09.400417   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:33:10.400757   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:10.400757   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:10.403934   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:11.404855   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:11.404855   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:11.407439   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:12.407525   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:12.407525   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:12.410864   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:13.411229   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:13.411229   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:13.414667   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:14.414815   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:14.414815   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:14.417914   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:15.418400   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:15.418400   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:15.421658   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:16.421803   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:16.421803   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:16.424468   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:33:16.424468   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:16.425000   10364 type.go:168] "Request Body" body=""
	I1217 00:33:16.425000   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:16.427532   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:16.499443   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:33:16.577484   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:16.582973   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:16.583014   10364 retry.go:31] will retry after 31.220452725s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
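
By this point the backoff has stretched from sub-second to 31s while the underlying condition never changes; a direct probe of the apiserver's /healthz endpoint answers the only question that matters here. A hedged sketch, with the port taken from the error messages above (anonymous access to /healthz is an assumption about the cluster config, not something this log confirms):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // healthz performs one unauthenticated health probe against the apiserver.
    func healthz(base string) (string, error) {
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return "", err // here: the same connection refused as everywhere above
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return fmt.Sprintf("%s: %s", resp.Status, body), nil
    }

    func main() {
        fmt.Println(healthz("https://localhost:8441"))
    }
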
	I1217 00:33:17.427856   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:17.427856   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:17.430661   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:18.431189   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:18.431189   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:18.434303   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:19.434667   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:19.434667   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:19.437774   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:20.438018   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:20.438018   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:20.441284   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:21.442005   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:21.442005   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:21.445477   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:22.446517   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:22.446517   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:22.451991   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:33:23.452224   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:23.452224   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:23.455297   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:24.455662   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:24.455662   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:24.458123   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:25.458634   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:25.458634   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:25.461576   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:26.462089   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:26.462563   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:26.465489   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:33:26.465489   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:26.465647   10364 type.go:168] "Request Body" body=""
	I1217 00:33:26.465647   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:26.468381   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:27.469289   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:27.469617   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:27.472277   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:28.472725   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:28.473201   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:28.476219   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:29.218035   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:33:29.290496   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:29.295368   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:29.295368   10364 retry.go:31] will retry after 28.200848873s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
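The apply above fails before anything reaches the cluster: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and with nothing answering on localhost:8441 that fetch dies with connection refused, so kubectl exits 1 and minikube (retry.go:31) schedules another attempt about 28 s later. A hedged sketch of this apply-with-backoff pattern follows; applyWithRetry, the backoff parameters, and shelling out to kubectl are illustrative assumptions, not minikube's addons code.

// apply_retry_sketch.go - illustrative apply-with-backoff loop.
// Assumes kubectl is on PATH and KUBECONFIG points at the cluster.
package main

import (
    "fmt"
    "os/exec"
    "time"

    "k8s.io/apimachinery/pkg/util/wait"
)

func applyWithRetry(manifest string) error {
    backoff := wait.Backoff{
        Duration: 10 * time.Second, // first delay
        Factor:   2.0,              // grow each attempt
        Jitter:   0.5,              // randomized, like the 28.2s gap above
        Steps:    5,
    }
    return wait.ExponentialBackoff(backoff, func() (bool, error) {
        out, err := exec.Command("kubectl", "apply", "--force", "-f", manifest).CombinedOutput()
        if err != nil {
            fmt.Printf("apply %s failed, will retry: %v\n%s", manifest, err, out)
            return false, nil // keep retrying until Steps are exhausted
        }
        return true, nil
    })
}

func main() {
    if err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml"); err != nil {
        fmt.Println("giving up:", err)
    }
}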
	I1217 00:33:29.476644   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:29.476644   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:29.479582   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:30.480382   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:30.480382   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:30.483362   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:31.484451   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:31.484451   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:31.488344   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:32.488579   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:32.488579   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:32.491919   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:33.492204   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:33.492204   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:33.494785   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:34.495401   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:34.495401   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:34.499412   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:35.499565   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:35.500315   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:35.503299   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:36.504300   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:36.504300   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:36.507870   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:36.507973   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:36.508033   10364 type.go:168] "Request Body" body=""
	I1217 00:33:36.508113   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:36.510973   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:37.511257   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:37.511257   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:37.514688   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:38.514936   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:38.514936   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:38.518386   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:39.518923   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:39.518923   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:39.520922   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:33:40.521680   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:40.521680   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:40.524367   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:41.525837   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:41.526267   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:41.528903   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:42.529201   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:42.529201   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:42.531842   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:43.532127   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:43.532127   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:43.534820   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:44.536381   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:44.536381   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:44.539631   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:45.540548   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:45.540548   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:45.543978   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:46.544552   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:46.544552   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:46.547995   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:46.547995   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:46.547995   10364 type.go:168] "Request Body" body=""
	I1217 00:33:46.547995   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:46.550843   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:47.551203   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:47.551203   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:47.554480   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:47.809190   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:33:47.891444   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:47.895455   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:47.895455   10364 retry.go:31] will retry after 48.235338214s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
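Note the two distinct failure modes so far: host-side GETs to https://127.0.0.1:56622 (the Docker-published apiserver port) end in EOF, meaning the forwarder accepts the TCP connection and then drops it, while kubectl inside the guest gets connection refused on [::1]:8441, meaning nothing is bound there. Together they point at the apiserver itself being down rather than a broken port forward. A small illustrative probe that distinguishes the two is sketched below; the addresses are taken from this log, and checking 8441 is only meaningful from inside the guest (for example via minikube ssh).

// probe_sketch.go - tiny TCP connectivity probe, illustrative only.
package main

import (
    "fmt"
    "net"
    "time"
)

// probe distinguishes "nothing listening" (connection refused, the
// [::1]:8441 case) from "listener present" (TCP accepted; if the peer
// then drops the socket, HTTP clients see EOF, the 127.0.0.1:56622 case).
func probe(addr string) {
    conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    if err != nil {
        fmt.Printf("%s: dial failed: %v\n", addr, err)
        return
    }
    conn.Close()
    fmt.Printf("%s: TCP accepted, listener present\n", addr)
}

func main() {
    probe("127.0.0.1:56622") // host side: Docker-published apiserver port
    probe("127.0.0.1:8441")  // only meaningful from inside the guest
}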
	I1217 00:33:48.554744   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:48.554744   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:48.557563   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:49.558144   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:49.558144   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:49.560984   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:50.561573   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:50.561999   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:50.564681   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:51.564893   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:51.565218   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:51.567822   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:52.568697   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:52.568697   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:52.572043   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:53.572367   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:53.572367   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:53.575543   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:54.576655   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:54.576655   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:54.579628   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:55.580688   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:55.580688   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:55.583829   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:33:56.585061   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:56.585061   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:56.589344   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:33:56.589344   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:33:56.589879   10364 type.go:168] "Request Body" body=""
	I1217 00:33:56.589987   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:56.592329   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:57.501146   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:33:57.569298   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:33:57.571601   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:57.571601   10364 retry.go:31] will retry after 30.590824936s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 00:33:57.593179   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:57.593179   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:57.595184   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:58.596116   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:58.596302   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:58.598982   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:33:59.599603   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:33:59.599603   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:33:59.602661   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:00.602875   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:00.603290   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:00.606460   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:01.607309   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:01.607677   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:01.609972   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:02.611301   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:02.611301   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:02.614599   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:03.614800   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:03.614800   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:03.618177   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:04.618602   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:04.618996   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:04.624198   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:05.625646   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:05.625646   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:05.629762   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:06.630421   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:06.630421   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:06.633232   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:06.633232   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:06.633809   10364 type.go:168] "Request Body" body=""
	I1217 00:34:06.633809   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:06.638868   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:07.639683   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:07.639683   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:07.643176   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:08.643409   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:08.643409   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:08.646509   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:09.647445   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:09.647445   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:09.650342   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:10.650843   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:10.651408   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:10.653984   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:11.654782   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:11.654782   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:11.660510   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:12.661264   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:12.661264   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:12.664725   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:13.665643   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:13.665643   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:13.668534   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:14.669351   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:14.669351   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:14.673188   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:15.673306   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:15.673709   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:15.675803   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:16.676778   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:16.676778   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:16.679773   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:16.679872   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:16.679999   10364 type.go:168] "Request Body" body=""
	I1217 00:34:16.680102   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:16.682768   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:17.683817   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:17.683817   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:17.686822   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:18.687027   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:18.687027   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:18.690241   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:19.690694   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:19.690694   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:19.693877   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:20.694298   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:20.694605   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:20.697314   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:21.697742   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:21.697742   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:21.700603   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:22.701210   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:22.701210   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:22.704640   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:23.705172   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:23.705172   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:23.707560   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:24.708954   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:24.708954   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:24.712011   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:25.712539   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:25.712539   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:25.717818   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:26.717996   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:26.717996   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:26.721620   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:34:26.721620   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:26.721620   10364 type.go:168] "Request Body" body=""
	I1217 00:34:26.721620   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:26.725519   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:27.726686   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:27.726686   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:27.729112   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:28.168229   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 00:34:28.439129   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:28.439129   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:28.439671   10364 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
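At this point the storage-provisioner retry budget is exhausted and addons.go surfaces the failure as a user-facing warning (out.go:285), embedding the full command, its exit status, and stderr inside a "running callbacks: [...]" message. A rough sketch of how such an aggregated callback error can be assembled is below; runCallbacks and the map of addon names are illustrative assumptions, not minikube's actual code.

// callbacks_sketch.go - illustrative error aggregation (Go 1.20+ for errors.Join).
package main

import (
    "errors"
    "fmt"
)

// runCallbacks collects per-addon failures into one error, in the spirit
// of the "running callbacks: [...]" warning above.
func runCallbacks(cbs map[string]func() error) error {
    var errs []error
    for name, cb := range cbs {
        if err := cb(); err != nil {
            errs = append(errs, fmt.Errorf("%s: %w", name, err))
        }
    }
    if len(errs) > 0 {
        return fmt.Errorf("running callbacks: %v", errors.Join(errs...))
    }
    return nil
}

func main() {
    err := runCallbacks(map[string]func() error{
        "storage-provisioner": func() error {
            return errors.New("kubectl apply: exit status 1")
        },
    })
    fmt.Println(err)
}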
	I1217 00:34:28.730022   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:28.730022   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:28.732579   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:29.733316   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:29.733316   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:29.737180   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:30.737898   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:30.738218   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:30.740633   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:31.741637   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:31.741637   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:31.744968   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:32.745244   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:32.745244   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:32.748688   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:33.749681   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:33.749681   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:33.753864   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:34.754458   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:34.754458   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:34.757550   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:35.757989   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:35.757989   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:35.762318   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:36.136043   10364 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 00:34:36.218441   10364 command_runner.go:130] ! error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:36.224593   10364 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 00:34:36.224593   10364 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8441/openapi/v2?timeout=32s": dial tcp [::1]:8441: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 00:34:36.231181   10364 out.go:179] * Enabled addons: 
	I1217 00:34:36.235148   10364 addons.go:530] duration metric: took 2m0.3003648s for enable addons: enabled=[]
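With 'default-storageclass' failing the same way, addons.go:530 closes the phase with enabled=[] after 2m0.30s, which places its start near 00:32:36 (00:34:36.235 minus 2m0.300s), before this excerpt begins. Every enable callback failed inside that window, while the node Ready poll below continues unaffected.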
	I1217 00:34:36.762736   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:36.762736   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:36.765107   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:36.765107   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:36.765107   10364 type.go:168] "Request Body" body=""
	I1217 00:34:36.765638   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:36.768239   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:37.768638   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:37.768638   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:37.772263   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:38.772833   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:38.772833   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:38.775690   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:39.776860   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:39.776860   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:39.779543   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:40.779907   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:40.779907   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:40.782631   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:41.783358   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:41.783809   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:41.787117   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:42.787421   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:42.787421   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:42.790478   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:43.791393   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:43.791393   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:43.794768   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:44.795719   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:44.795719   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:44.799050   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:45.799750   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:45.800118   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:45.802333   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:46.802808   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:46.802808   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:46.806272   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:34:46.806272   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
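The cycle above repeats throughout this log: minikube issues GET /api/v1/nodes/functional-409700 roughly once per second, every probe ultimately surfaces an EOF, and node_ready.go logs a warning after each burst of ten retries. A minimal, self-contained Go sketch of that poll-and-retry shape follows; names such as nodeURL and maxAttempts are illustrative assumptions, and this is not minikube's actual node_ready.go code.

package main

import (
	"crypto/tls"
	"fmt"
	"log"
	"net/http"
	"time"
)

func main() {
	// Illustrative values taken from the log above.
	nodeURL := "https://127.0.0.1:56622/api/v1/nodes/functional-409700"

	// A real client would trust the cluster CA; skipping verification
	// keeps this sketch self-contained against a local apiserver.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}

	const maxAttempts = 10 // assumption, mirroring the attempt=1..10 bursts
	for attempt := 1; attempt <= maxAttempts; attempt++ {
		resp, err := client.Get(nodeURL)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("node fetched; would now inspect the Ready condition")
				return
			}
			err = fmt.Errorf("unexpected status %q", resp.Status)
		}
		// Mirror the log: one-second pause between attempts.
		log.Printf("attempt=%d url=%q err=%v; retrying in 1s", attempt, nodeURL, err)
		time.Sleep(1 * time.Second)
	}
	log.Printf("error getting node (will retry): all %d attempts failed", maxAttempts)
}

The EOF in the warning suggests the connection to 127.0.0.1:56622 is being closed before the apiserver writes a response, so client-side retrying alone cannot succeed until the apiserver recovers.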
	I1217 00:34:46.806272   10364 type.go:168] "Request Body" body=""
	I1217 00:34:46.806272   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:46.808808   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:47.809106   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:47.809106   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:47.812072   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:48.812377   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:48.812377   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:48.815804   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:49.816160   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:49.816160   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:49.819073   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:50.819687   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:50.819687   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:50.824808   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:34:51.825256   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:51.825256   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:51.827149   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:34:52.828172   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:52.828172   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:52.831194   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:53.831502   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:53.831502   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:53.835949   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:54.836430   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:54.836430   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:54.840704   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:34:55.840945   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:55.840945   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:55.844273   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:56.844698   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:56.844774   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:56.847718   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:34:56.847718   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:34:56.847718   10364 type.go:168] "Request Body" body=""
	I1217 00:34:56.847718   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:56.850361   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:34:57.850724   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:57.850724   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:57.853992   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:58.854839   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:58.854839   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:58.857985   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:34:59.858686   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:34:59.859048   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:34:59.863493   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:35:00.863731   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:00.863731   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:00.867009   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:01.867548   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:01.867986   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:01.870485   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:02.870682   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:02.870682   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:02.874134   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:03.874927   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:03.874927   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:03.877992   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:04.878757   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:04.878757   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:04.882012   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:05.882985   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:05.882985   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:05.886320   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:06.887395   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:06.887395   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:06.890772   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:35:06.890844   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:35:06.890844   10364 type.go:168] "Request Body" body=""
	I1217 00:35:06.890844   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:06.892912   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:07.893541   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:07.893541   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:07.897243   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:08.897423   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:08.897423   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:08.901955   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:35:09.902222   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:09.902222   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:09.905347   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:10.906346   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:10.906346   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:10.909589   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:11.910013   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:11.910424   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:11.913496   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:12.913792   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:12.913792   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:12.917334   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:13.917794   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:13.917794   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:13.920911   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:14.921451   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:14.921902   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:14.924686   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:15.925539   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:15.925539   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:15.928618   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:16.928871   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:16.928871   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:16.932364   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:35:16.932364   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:35:16.932364   10364 type.go:168] "Request Body" body=""
	I1217 00:35:16.932364   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:16.935267   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:17.936075   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:17.936075   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:17.939252   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:18.940390   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:18.940390   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:18.943332   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:19.943802   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:19.943802   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:19.946902   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:20.947509   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:20.947882   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:20.949988   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:21.950644   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:21.950644   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:21.954065   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:22.954236   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:22.954236   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:22.958266   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:23.958794   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:23.959062   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:23.961451   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:24.962012   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:24.962012   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:24.965125   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:25.965439   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:25.965439   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:25.968637   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:26.968810   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:26.968810   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:26.971892   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:35:26.971961   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:35:26.972008   10364 type.go:168] "Request Body" body=""
	I1217 00:35:26.972008   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:26.977052   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:35:27.977730   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:27.977730   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:27.980941   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:28.981406   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:28.981406   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:28.984099   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:29.985140   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:29.985452   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:29.988385   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:30.989318   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:30.989318   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:30.992251   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:31.993148   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:31.993515   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:31.996483   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:32.996803   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:32.997153   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:32.999821   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:33.999930   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:33.999930   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:34.003148   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:35.003410   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:35.003410   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:35.006455   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:36.008349   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:36.008349   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:36.010952   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:37.011100   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:37.011100   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:37.014149   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:35:37.014149   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
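Each with_retry.go entry records the client classifying the empty response as a Retry-After and waiting a fixed delay="1s" before the next attempt. A hedged sketch of honoring a Retry-After header with a one-second fallback, which is what the delay="1s" entries above fall back to when the response carries no usable header; this is not client-go's actual retry implementation.

package main

import (
	"fmt"
	"net/http"
	"strconv"
	"time"
)

// retryDelay returns the wait suggested by a Retry-After header, or a
// one-second default when the header is absent or unparsable.
func retryDelay(resp *http.Response) time.Duration {
	const fallback = 1 * time.Second
	if resp == nil {
		return fallback
	}
	v := resp.Header.Get("Retry-After")
	if secs, err := strconv.Atoi(v); err == nil && secs > 0 {
		return time.Duration(secs) * time.Second
	}
	return fallback
}

func main() {
	// Empty headers, as in the log's status="" headers="" responses:
	// the fallback delay applies.
	resp := &http.Response{Header: http.Header{}}
	fmt.Println(retryDelay(resp)) // prints 1s
}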
	I1217 00:35:37.014149   10364 type.go:168] "Request Body" body=""
	I1217 00:35:37.014678   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:37.016502   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:35:38.017464   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:38.017464   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:38.020305   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:39.020641   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:39.020641   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:39.023532   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:40.024042   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:40.024042   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:40.027707   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:41.027942   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:41.027942   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:41.031346   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:42.032292   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:42.032292   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:42.035463   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:43.035799   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:43.036298   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:43.039139   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:44.039453   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:44.039453   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:44.042907   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:45.043589   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:45.043589   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:45.046766   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:46.047648   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:46.047648   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:46.051224   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:47.051642   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:47.051642   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:47.054716   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:35:47.054716   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:35:47.054716   10364 type.go:168] "Request Body" body=""
	I1217 00:35:47.054716   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:47.056987   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:48.058345   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:48.058345   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:48.061555   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:49.061851   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:49.061851   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:49.065062   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:50.065656   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:50.065933   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:50.068127   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:51.068865   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:51.069263   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:51.071479   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:52.072199   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:52.072199   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:52.075414   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:53.076211   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:53.076211   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:53.079310   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:54.079644   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:54.079644   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:54.083395   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:35:55.083663   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:55.083663   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:55.086632   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:56.087097   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:56.087494   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:56.091591   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:35:57.091913   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:57.092314   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:57.095048   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:35:57.095048   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:35:57.095048   10364 type.go:168] "Request Body" body=""
	I1217 00:35:57.095640   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:57.098264   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:58.098993   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:58.098993   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:58.101747   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:35:59.103113   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:35:59.103113   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:35:59.105884   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:00.107028   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:00.107028   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:00.109881   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:01.110650   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:01.110650   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:01.114650   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:02.114915   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:02.114915   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:02.118186   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:03.118580   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:03.118580   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:03.121988   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:04.123025   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:04.123025   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:04.126587   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:05.127042   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:05.127451   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:05.132256   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:36:06.132687   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:06.133104   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:06.135375   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:07.137054   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:07.137054   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:07.140223   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:36:07.140223   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:36:07.140223   10364 type.go:168] "Request Body" body=""
	I1217 00:36:07.140223   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:07.142965   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:08.143629   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:08.143629   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:08.147215   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:09.147522   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:09.147522   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:09.150564   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:10.151061   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:10.151061   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:10.153608   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:11.154626   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:11.154626   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:11.157406   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:12.158277   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:12.158752   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:12.162911   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:36:13.163269   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:13.163269   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:13.166264   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:14.166990   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:14.166990   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:14.171561   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:36:15.171912   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:15.171912   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:15.175056   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:16.176256   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:16.176256   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:16.179133   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:17.179808   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:17.179808   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:17.182925   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:36:17.182976   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
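The warning names the check the poller never gets to perform: reading the node's "Ready" condition. A pared-down sketch of that inspection against the Kubernetes v1 Node schema; the struct below is an illustrative subset with an example payload, not client-go's generated types.

package main

import (
	"encoding/json"
	"fmt"
)

type nodeCondition struct {
	Type   string `json:"type"`
	Status string `json:"status"`
}

type node struct {
	Status struct {
		Conditions []nodeCondition `json:"conditions"`
	} `json:"status"`
}

func main() {
	// A truncated example payload; a real response carries many more fields.
	body := []byte(`{"status":{"conditions":[{"type":"Ready","status":"True"}]}}`)

	var n node
	if err := json.Unmarshal(body, &n); err != nil {
		fmt.Println("decode error:", err)
		return
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			fmt.Printf("node Ready condition status: %s\n", c.Status)
			return
		}
	}
	fmt.Println("Ready condition not present")
}

In this run the decode step is never reached, because every GET above dies with EOF before a node object is returned.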
	I1217 00:36:17.183085   10364 type.go:168] "Request Body" body=""
	I1217 00:36:17.183154   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:17.186098   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:18.186373   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:18.186373   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:18.188978   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:19.189978   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:19.189978   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:19.193521   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:20.193758   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:20.194053   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:20.196502   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:21.196916   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:21.196916   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:21.200034   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:22.200545   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:22.200545   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:22.204008   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:23.205276   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:23.205569   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:23.207867   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:24.208451   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:24.208451   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:24.211642   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:25.212042   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:25.212042   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:25.214973   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:26.215279   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:26.215279   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:26.218537   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:27.219034   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:27.219034   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:27.221530   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:36:27.221530   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
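	[editor's note] The log above has settled into a fixed cycle that repeats for the remainder of this section: one GET of the node object, ten with_retry.go re-attempts at a 1s Retry-After interval, then a single node_ready.go warning before the next poll begins. The following is a rough, standalone sketch of that control flow only (an assumed shape, not minikube's or client-go's actual implementation; the URL, the 1s delay, and the 10-attempt budget are taken from the log, while the helper names are invented for illustration):

	// A standalone sketch of the polling loop visible in this log
	// (assumed shape, not minikube's or client-go's actual code).
	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// getWithRetryAfter issues a GET and, whenever the server answers with a
	// Retry-After header, sleeps `delay` and tries again, up to maxAttempts
	// times -- mirroring the with_retry.go "attempt=1..10" lines above.
	func getWithRetryAfter(url string, maxAttempts int, delay time.Duration) (*http.Response, error) {
		var lastErr error
		for attempt := 1; attempt <= maxAttempts; attempt++ {
			// A real client would carry the cluster CA for this TLS endpoint;
			// here a transport error simply surfaces to the caller.
			resp, err := http.Get(url)
			if err != nil {
				return nil, err
			}
			if resp.Header.Get("Retry-After") == "" {
				return resp, nil // a usable response; the caller decodes the Node object
			}
			resp.Body.Close()
			lastErr = fmt.Errorf("still getting Retry-After after attempt %d", attempt)
			time.Sleep(delay)
		}
		return nil, lastErr
	}

	func main() {
		url := "https://127.0.0.1:56622/api/v1/nodes/functional-409700" // from the log
		for poll := 0; poll < 3; poll++ { // the real loop runs until a timeout elsewhere
			resp, err := getWithRetryAfter(url, 10, time.Second)
			if err != nil {
				// Corresponds to the node_ready.go:55 warning lines: log and poll again.
				fmt.Printf("error getting node %q condition %q status (will retry): %v\n",
					"functional-409700", "Ready", err)
				continue
			}
			resp.Body.Close() // node retrieved; a readiness check would inspect status.conditions
			break
		}
	}

	In the failing run recorded here, every cycle ends on the warning path (the underlying request ultimately fails with EOF), which is why the same three-line pattern recurs once per second below until the test's wait deadline is reached.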
	I1217 00:36:27.222255   10364 type.go:168] "Request Body" body=""
	I1217 00:36:27.222319   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:27.225150   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:28.225829   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:28.225829   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:28.229281   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:29.229629   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:29.229922   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:29.232417   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:30.233433   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:30.233433   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:30.236676   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:31.237185   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:31.237185   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:31.240270   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:32.240968   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:32.241316   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:32.244151   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:33.244415   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:33.244415   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:33.248305   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:34.248592   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:34.248592   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:34.252121   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:35.252241   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:35.252241   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:35.254173   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:36:36.254586   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:36.254586   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:36.257572   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:37.258337   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:37.258337   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:37.261475   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:36:37.261475   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:36:37.262206   10364 type.go:168] "Request Body" body=""
	I1217 00:36:37.262532   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:37.264961   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:38.265631   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:38.265854   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:38.268561   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:39.269290   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:39.269290   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:39.271879   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:40.272273   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:40.272273   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:40.275242   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:41.276205   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:41.276623   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:41.278866   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:42.279206   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:42.279206   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:42.282173   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:43.282751   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:43.282751   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:43.285875   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:44.286756   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:44.287077   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:44.289831   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:45.290159   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:45.290159   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:45.293298   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:46.294545   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:46.294545   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:46.297578   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:47.297935   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:47.297935   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:47.300692   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:36:47.300692   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:36:47.300692   10364 type.go:168] "Request Body" body=""
	I1217 00:36:47.300692   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:47.302635   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:36:48.303208   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:48.303208   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:48.306418   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:49.306667   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:49.307130   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:49.309815   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:50.310768   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:50.310768   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:50.313618   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:51.314224   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:51.314224   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:51.316809   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:52.317523   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:52.317523   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:52.322067   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:36:53.322359   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:53.322359   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:53.325176   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:54.325549   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:54.325549   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:54.328395   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:55.328984   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:55.329339   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:55.334171   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:36:56.334464   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:56.334464   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:56.337612   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:57.337960   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:57.337960   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:57.340932   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:36:57.341462   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:36:57.341593   10364 type.go:168] "Request Body" body=""
	I1217 00:36:57.341654   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:57.344564   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:36:58.345573   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:58.345573   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:58.348987   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:36:59.349186   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:36:59.349186   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:36:59.352680   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:00.353127   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:00.353127   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:00.355791   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:01.356152   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:01.356152   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:01.360722   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:37:02.361585   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:02.362214   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:02.364765   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:03.365485   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:03.365485   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:03.368349   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:04.368821   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:04.368821   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:04.371965   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:05.372332   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:05.372332   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:05.375376   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:06.376031   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:06.376031   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:06.378850   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:07.380334   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:07.380334   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:07.383178   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:37:07.383178   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:37:07.383178   10364 type.go:168] "Request Body" body=""
	I1217 00:37:07.383178   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:07.386449   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:08.387594   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:08.388059   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:08.391028   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:09.391186   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:09.391186   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:09.394448   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:10.394971   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:10.394971   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:10.399668   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:37:11.400389   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:11.400389   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:11.403573   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:12.404531   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:12.404531   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:12.407846   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:13.408153   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:13.408153   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:13.411907   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:14.412175   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:14.412175   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:14.415697   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:15.416228   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:15.416228   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:15.419897   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:16.420794   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:16.420794   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:16.424642   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:17.424997   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:17.424997   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:17.428835   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:37:17.428983   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:37:17.428983   10364 type.go:168] "Request Body" body=""
	I1217 00:37:17.428983   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:17.432188   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:18.433366   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:18.433366   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:18.437105   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:19.437417   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:19.437866   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:19.443541   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	I1217 00:37:20.444729   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:20.444729   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:20.447421   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:21.447798   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:21.447798   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:21.450995   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:22.451672   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:22.451672   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:22.454367   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:23.455345   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:23.455345   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:23.458961   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:24.459152   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:24.459152   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:24.462362   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:25.462863   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:25.462863   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:25.465098   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:26.465439   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:26.465821   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:26.468832   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:27.469064   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:27.469454   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:27.472358   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:37:27.472422   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:37:27.472536   10364 type.go:168] "Request Body" body=""
	I1217 00:37:27.472615   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:27.475175   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:28.475953   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:28.475953   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:28.479074   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:29.479701   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:29.479701   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:29.482529   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:30.483219   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:30.483219   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:30.486254   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:31.487104   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:31.487104   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:31.489733   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:32.490240   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:32.490767   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:32.493579   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:33.493807   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:33.494211   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:33.497178   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:34.497955   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:34.497955   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:34.501263   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:35.501483   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:35.501483   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:35.504417   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:36.504622   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:36.504622   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:36.508593   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:37.509653   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:37.509653   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:37.512288   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:37:37.512288   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:37:37.512424   10364 type.go:168] "Request Body" body=""
	I1217 00:37:37.512522   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:37.514595   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:38.514845   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:38.514845   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:38.517717   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:39.518411   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:39.518411   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:39.520864   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:40.521889   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:40.521889   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:40.525103   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:41.525419   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:41.525419   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:41.528361   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:42.528733   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:42.529149   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:42.532111   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:43.532896   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:43.532896   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:43.536252   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:44.536867   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:44.536867   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:44.540157   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:45.540486   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:45.540486   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:45.543711   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:46.543879   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:46.543879   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:46.546377   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:47.546832   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:47.546832   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:47.550543   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:37:47.550543   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:37:47.550643   10364 type.go:168] "Request Body" body=""
	I1217 00:37:47.550786   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:47.552960   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:48.553202   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:48.553202   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:48.558015   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:37:49.559371   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:49.559371   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:49.562548   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:50.562966   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:50.562966   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:50.565800   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:51.566293   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:51.566623   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:51.569597   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:52.570511   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:52.570511   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:52.573392   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:53.573965   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:53.573965   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:53.576340   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:54.577062   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:54.577463   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:54.579836   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:55.580473   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:55.580473   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:55.583734   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:56.584454   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:56.584454   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:56.587256   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:37:57.588397   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:57.588397   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:57.593527   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=5
	W1217 00:37:57.593527   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:37:57.593527   10364 type.go:168] "Request Body" body=""
	I1217 00:37:57.593527   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:57.597825   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:37:58.598550   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:58.598550   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:58.602122   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:37:59.602444   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:37:59.602444   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:37:59.605501   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:00.606096   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:00.606096   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:00.608989   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:01.609865   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:01.609965   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:01.613038   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:02.613818   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:02.614067   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:02.617196   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:03.617950   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:03.618366   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:03.621156   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:04.621587   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:04.621587   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:04.624616   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:05.625123   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:05.625123   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:05.627780   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:06.628169   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:06.628602   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:06.632684   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=4
	I1217 00:38:07.633450   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:07.633450   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:07.636697   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:38:07.636697   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:38:07.636697   10364 type.go:168] "Request Body" body=""
	I1217 00:38:07.636697   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:07.638671   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:38:08.639000   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:08.639000   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:08.642420   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:09.642718   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:09.642718   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:09.645881   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:10.646391   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:10.646391   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:10.649653   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:11.650077   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:11.650077   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:11.653855   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:12.654508   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:12.654508   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:12.657918   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:13.658238   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:13.658238   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:13.661446   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:14.661684   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:14.661684   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:14.664655   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:15.665257   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:15.665578   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:15.672111   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=6
	I1217 00:38:16.672363   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:16.672363   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:16.675593   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:17.676054   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:17.676054   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:17.679454   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	W1217 00:38:17.679454   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:38:17.679454   10364 type.go:168] "Request Body" body=""
	I1217 00:38:17.679454   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:17.681452   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:38:18.682087   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:18.682087   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:18.685399   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:19.686028   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:19.686535   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:19.689161   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:20.689948   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:20.690239   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:20.692554   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:21.693716   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:21.694009   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:21.696661   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:22.697780   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:22.697780   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:22.700917   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:23.702225   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:23.702225   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:23.705612   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:24.706750   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:24.706750   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:24.710496   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:25.710729   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:25.711065   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:25.713912   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:26.714178   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=9 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:26.714178   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:26.718058   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:27.718245   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=10 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:27.718578   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:27.721305   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:38:27.721375   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): Get "https://127.0.0.1:56622/api/v1/nodes/functional-409700": EOF
	I1217 00:38:27.721441   10364 type.go:168] "Request Body" body=""
	I1217 00:38:27.721441   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:27.723332   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=1
	I1217 00:38:28.723805   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=1 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:28.724207   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:28.727033   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:29.727723   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=2 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:29.727723   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:29.730941   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:30.731355   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=3 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:30.731355   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:30.734083   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:31.734645   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=4 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:31.734645   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:31.737932   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:32.738159   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=5 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:32.738159   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:32.741332   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=3
	I1217 00:38:33.741889   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=6 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:33.741889   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:33.744576   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:34.745133   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=7 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:34.745546   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:34.747888   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	I1217 00:38:35.749177   10364 with_retry.go:234] "Got a Retry-After response" delay="1s" attempt=8 url="https://127.0.0.1:56622/api/v1/nodes/functional-409700"
	I1217 00:38:35.749177   10364 round_trippers.go:527] "Request" verb="GET" url="https://127.0.0.1:56622/api/v1/nodes/functional-409700" headers=<
		Accept: application/vnd.kubernetes.protobuf,application/json
		User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	 >
	I1217 00:38:35.751796   10364 round_trippers.go:632] "Response" status="" headers="" milliseconds=2
	W1217 00:38:36.264530   10364 node_ready.go:55] error getting node "functional-409700" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1217 00:38:36.264530   10364 node_ready.go:38] duration metric: took 6m0.0004133s for node "functional-409700" to be "Ready" ...
	I1217 00:38:36.268017   10364 out.go:203] 
	W1217 00:38:36.270772   10364 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 00:38:36.270772   10364 out.go:285] * 
	W1217 00:38:36.272556   10364 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:38:36.275101   10364 out.go:203] 
	
	
	==> Docker <==
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.065379308Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.065401310Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.065424712Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.065461915Z" level=info msg="Initializing buildkit"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.183346289Z" level=info msg="Completed buildkit initialization"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.191707575Z" level=info msg="Daemon has completed initialization"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.191889990Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.191902191Z" level=info msg="API listen on [::]:2376"
	Dec 17 00:32:32 functional-409700 dockerd[10537]: time="2025-12-17T00:32:32.191916192Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 00:32:32 functional-409700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 00:32:32 functional-409700 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:32:32 functional-409700 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 17 00:32:32 functional-409700 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 17 00:32:32 functional-409700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Loaded network plugin cni"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 00:32:33 functional-409700 cri-dockerd[10854]: time="2025-12-17T00:32:33Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 00:32:33 functional-409700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:41:41.161536   21218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:41:41.162654   21218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:41:41.164117   21218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:41:41.167972   21218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:41:41.169976   21218 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000806] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000803] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000826] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000811] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000815] FS:  0000000000000000 GS:  0000000000000000
	[Dec17 00:32] CPU: 7 PID: 54557 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000816] RIP: 0033:0x7f3abb92bb20
	[  +0.000446] Code: Unable to access opcode bytes at RIP 0x7f3abb92baf6.
	[  +0.000672] RSP: 002b:00007ffe2fcb88c0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000804] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000788] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000852] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001011] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001269] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001111] FS:  0000000000000000 GS:  0000000000000000
	[  +0.944697] CPU: 4 PID: 54682 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000867] RIP: 0033:0x7fa9cdbc0b20
	[  +0.000408] Code: Unable to access opcode bytes at RIP 0x7fa9cdbc0af6.
	[  +0.000668] RSP: 002b:00007ffde5330df0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.001045] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.001333] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001212] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001083] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000810] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000879] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 00:41:41 up  1:00,  0 user,  load average: 0.56, 0.38, 0.55
	Linux functional-409700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:41:37 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:41:38 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1061.
	Dec 17 00:41:38 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:41:38 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:41:38 functional-409700 kubelet[21059]: E1217 00:41:38.739204   21059 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:41:38 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:41:38 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:41:39 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1062.
	Dec 17 00:41:39 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:41:39 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:41:39 functional-409700 kubelet[21071]: E1217 00:41:39.491097   21071 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:41:39 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:41:39 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:41:40 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1063.
	Dec 17 00:41:40 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:41:40 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:41:40 functional-409700 kubelet[21099]: E1217 00:41:40.248437   21099 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:41:40 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:41:40 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:41:40 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1064.
	Dec 17 00:41:40 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:41:40 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:41:40 functional-409700 kubelet[21202]: E1217 00:41:40.993071   21202 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:41:40 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:41:40 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700: exit status 2 (591.7881ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-409700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/MinikubeKubectlCmdDirectly (54.23s)
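Note on the failure mode above: the kubelet journal shows the same validation error on every one of its 1000+ restarts (restart counter at 1061-1064 in the excerpt): the v1.35.0-beta.0 kubelet refuses to start because this WSL2 kernel is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"). With no kubelet, the apiserver behind 127.0.0.1:56622 never comes up, which is why every node-ready poll ends in EOF until the 6m0s wait expires. The kubeadm preflight warning quoted in the next test names the opt-out: set the kubelet configuration option 'FailCgroupV1' to 'false'. A minimal sketch of that override, assuming the kubelet reads the /var/lib/kubelet/config.yaml written by kubeadm (path taken from the kubeadm output below); the field name follows the SystemVerification warning and KEP-5573 and has not been verified against this CI image:

	# /var/lib/kubelet/config.yaml (sketch, not verified on this host)
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	# Opt back in to (deprecated) cgroup v1; without this, the v1.35.0-beta.0
	# kubelet in these logs fails config validation on a cgroup v1 host.
	failCgroupV1: false

The other way out is to move the host to cgroup v2, e.g. on WSL2 by setting kernelCommandLine = cgroup_no_v1=all under the [wsl2] section of %USERPROFILE%\.wslconfig and restarting WSL (assumption: untested on this CI host).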

x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (741.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-409700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1217 00:43:14.105617    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:44:37.176148    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:45:33.700270    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:48:14.108495    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:48:36.774742    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:50:33.702449    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:53:14.112073    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-409700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: exit status 109 (12m17.6703373s)

-- stdout --
	* [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "functional-409700" primary control-plane node in "functional-409700" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	  - apiserver.enable-admission-plugins=NamespaceAutoProvision
	
	

-- /stdout --
** stderr ** 
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001130665s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* 
	X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000873338s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
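	
	The check that timed out above is a plain HTTP probe of the kubelet's health endpoint. A minimal sketch of reproducing it by hand from inside the node, using the exact follow-up checks the message recommends (entering via `minikube ssh -p functional-409700` is an assumption; the profile name is taken from later in this log):
	
		minikube ssh -p functional-409700
		# the probe kubeadm retries for up to 4m0s:
		curl -sSL http://127.0.0.1:10248/healthz
		# the two checks the message above recommends:
		systemctl status kubelet
		journalctl -xeu kubelet | tail -n 50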
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000873338s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Related issue: https://github.com/kubernetes/minikube/issues/4172

** /stderr **
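
The boxed advice and the closing suggestion in the stderr above reduce to two concrete commands (copied from the advice itself; whether they resolve this particular failure is not established by this log):

	# retry with the kubelet cgroup driver pinned to systemd, per the suggestion:
	out/minikube-windows-amd64.exe start -p functional-409700 --extra-config=kubelet.cgroup-driver=systemd
	# capture full logs for a bug report, per the boxed advice:
	out/minikube-windows-amd64.exe -p functional-409700 logs --file=logs.txt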
functional_test.go:774: failed to restart minikube. args "out/minikube-windows-amd64.exe start -p functional-409700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all": exit status 109
functional_test.go:776: restart took 12m17.6793233s for "functional-409700" cluster.
I1217 00:54:00.357696    4168 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-409700
helpers_test.go:244: (dbg) docker inspect functional-409700:

-- stdout --
	[
	    {
	        "Id": "ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de",
	        "Created": "2025-12-17T00:24:05.223199249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:24:05.522288836Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hosts",
	        "LogPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de-json.log",
	        "Name": "/functional-409700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-409700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-409700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-409700",
	                "Source": "/var/lib/docker/volumes/functional-409700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-409700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-409700",
	                "name.minikube.sigs.k8s.io": "functional-409700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e875b43ca920e8e90c82b8f1c4d2b0999a57d980ebe17c6406f45a4ccb58168",
	            "SandboxKey": "/var/run/docker/netns/6e875b43ca92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56623"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56619"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56620"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56621"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56622"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-409700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ee1b2722ed4e503e063723d4c0c00abc99d4e57387b6e181156511528a5a0896",
	                    "EndpointID": "42fbe7a4b084643a92cc2b6c93734665bcde06afb5eef9fe47b1c8f2757b2d71",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-409700",
	                        "ee5097ea8c4b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
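
Single fields can be pulled out of this JSON with Go templates instead of reading the whole dump; both templates below appear verbatim in the provisioning steps later in this log:

	# container state only (the "State.Status" field above):
	docker container inspect functional-409700 --format={{.State.Status}}
	# host port bound to the node's SSH port (the "22/tcp" entry under Ports above):
	docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700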
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700: exit status 2 (617.2965ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
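
--format={{.Host}} prints one field of the status struct; other components can be queried the same way (a sketch; the extra field names come from the `minikube status --help` text, not from this log):

	# host state only, as the harness runs it above:
	out/minikube-windows-amd64.exe status -p functional-409700 --format={{.Host}}
	# kubelet and apiserver state in one line (field names assumed from the help text):
	out/minikube-windows-amd64.exe status -p functional-409700 --format "{{.Host}} {{.Kubelet}} {{.APIServer}}"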
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 logs -n 25: (1.3806009s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-045600 ssh pgrep buildkitd                                                                                   │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │                     │
	│ image          │ functional-045600 image build -t localhost/my-image:functional-045600 testdata\build --alsologtostderr                  │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls                                                                                              │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                 │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                 │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                 │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ delete         │ -p functional-045600                                                                                                    │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:23 UTC │ 17 Dec 25 00:23 UTC │
	│ start          │ -p functional-409700 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:23 UTC │                     │
	│ start          │ -p functional-409700 --alsologtostderr -v=8                                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ cache          │ functional-409700 cache add registry.k8s.io/pause:3.1                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ functional-409700 cache add registry.k8s.io/pause:3.3                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ functional-409700 cache add registry.k8s.io/pause:latest                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ functional-409700 cache add minikube-local-cache-test:functional-409700                                                 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ functional-409700 cache delete minikube-local-cache-test:functional-409700                                              │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh            │ functional-409700 ssh sudo crictl images                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh            │ functional-409700 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh            │ functional-409700 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │                     │
	│ cache          │ functional-409700 cache reload                                                                                          │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh            │ functional-409700 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ kubectl        │ functional-409700 kubectl -- --context functional-409700 get pods                                                       │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │                     │
	│ start          │ -p functional-409700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:41:42
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:41:42.742737    7944 out.go:360] Setting OutFile to fd 1692 ...
	I1217 00:41:42.785452    7944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:41:42.785452    7944 out.go:374] Setting ErrFile to fd 2032...
	I1217 00:41:42.785452    7944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:41:42.823093    7944 out.go:368] Setting JSON to false
	I1217 00:41:42.826928    7944 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3691,"bootTime":1765928411,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:41:42.827062    7944 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:41:42.832423    7944 out.go:179] * [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:41:42.834008    7944 notify.go:221] Checking for updates...
	I1217 00:41:42.836028    7944 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:41:42.837747    7944 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:41:42.839400    7944 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:41:42.841743    7944 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:41:42.843853    7944 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:41:42.846824    7944 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:41:42.847138    7944 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:41:43.032802    7944 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:41:43.036200    7944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:41:43.287623    7944 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-17 00:41:43.26443223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:41:43.290624    7944 out.go:179] * Using the docker driver based on existing profile
	I1217 00:41:43.295624    7944 start.go:309] selected driver: docker
	I1217 00:41:43.295624    7944 start.go:927] validating driver "docker" against &{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:41:43.295624    7944 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:41:43.302622    7944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:41:43.528811    7944 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-17 00:41:43.511883839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:41:43.567003    7944 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:41:43.567003    7944 cni.go:84] Creating CNI manager for ""
	I1217 00:41:43.567003    7944 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:41:43.567003    7944 start.go:353] cluster config:
	{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:41:43.571110    7944 out.go:179] * Starting "functional-409700" primary control-plane node in "functional-409700" cluster
	I1217 00:41:43.575004    7944 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 00:41:43.577924    7944 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:41:43.581930    7944 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:41:43.581930    7944 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:41:43.581930    7944 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 00:41:43.581930    7944 cache.go:65] Caching tarball of preloaded images
	I1217 00:41:43.582517    7944 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 00:41:43.582517    7944 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 00:41:43.582517    7944 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\config.json ...
	I1217 00:41:43.660928    7944 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:41:43.660928    7944 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:41:43.660928    7944 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:41:43.660928    7944 start.go:360] acquireMachinesLock for functional-409700: {Name:mk3729943c20c012b6c7db136193ce43a4a81cc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:41:43.660928    7944 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-409700"
	I1217 00:41:43.660928    7944 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:41:43.660928    7944 fix.go:54] fixHost starting: 
	I1217 00:41:43.667914    7944 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:41:43.723914    7944 fix.go:112] recreateIfNeeded on functional-409700: state=Running err=<nil>
	W1217 00:41:43.723914    7944 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:41:43.726919    7944 out.go:252] * Updating the running docker "functional-409700" container ...
	I1217 00:41:43.726919    7944 machine.go:94] provisionDockerMachine start ...
	I1217 00:41:43.731914    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:43.796916    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:43.796916    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:43.796916    7944 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:41:43.969131    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:41:43.969131    7944 ubuntu.go:182] provisioning hostname "functional-409700"
	I1217 00:41:43.975058    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.033428    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:44.033980    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:44.033980    7944 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-409700 && echo "functional-409700" | sudo tee /etc/hostname
	I1217 00:41:44.218389    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:41:44.221624    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.281826    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:44.282333    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:44.282333    7944 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-409700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-409700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-409700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:41:44.449024    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:41:44.449024    7944 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 00:41:44.449024    7944 ubuntu.go:190] setting up certificates
	I1217 00:41:44.449024    7944 provision.go:84] configureAuth start
	I1217 00:41:44.452071    7944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:41:44.516121    7944 provision.go:143] copyHostCerts
	I1217 00:41:44.516430    7944 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 00:41:44.516430    7944 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 00:41:44.516430    7944 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 00:41:44.517399    7944 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 00:41:44.517399    7944 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 00:41:44.517399    7944 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 00:41:44.518364    7944 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 00:41:44.518364    7944 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 00:41:44.518364    7944 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 00:41:44.519103    7944 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-409700 san=[127.0.0.1 192.168.49.2 functional-409700 localhost minikube]
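	
	For TLS to the dockerd port to verify, the server certificate generated here has to carry every name in the san=[...] list above. A quick check on the node (a sketch; /etc/docker/server.pem matches the ServerCertRemotePath set a few lines earlier in this log):
	
		# prints the certificate text; look for the names under the SAN extension
		sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'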
	I1217 00:41:44.613354    7944 provision.go:177] copyRemoteCerts
	I1217 00:41:44.617354    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:41:44.620354    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.676405    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:44.805633    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:41:44.840310    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:41:44.872497    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:41:44.899304    7944 provision.go:87] duration metric: took 450.2424ms to configureAuth
	I1217 00:41:44.899304    7944 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:41:44.899304    7944 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:41:44.902693    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.962192    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:44.962661    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:44.962688    7944 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 00:41:45.129265    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 00:41:45.129265    7944 ubuntu.go:71] root file system type: overlay
	I1217 00:41:45.129265    7944 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 00:41:45.133980    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.191141    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:45.191583    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:45.191676    7944 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 00:41:45.381081    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 00:41:45.384910    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.439634    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:45.439634    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:45.439634    7944 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 00:41:45.639837    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:41:45.639837    7944 machine.go:97] duration metric: took 1.9128981s to provisionDockerMachine
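The empty ExecStart= line in the unit written above is the standard systemd idiom for clearing an inherited start command before defining a new one, exactly as the embedded comment explains. A minimal sketch of how the result could be verified on the node, assuming shell access via minikube ssh:

    # Show the unit file(s) systemd has actually loaded for docker
    minikube ssh -- systemctl cat docker.service
    # Confirm a single effective ExecStart survived the reset
    minikube ssh -- systemctl show docker.service -p ExecStart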
	I1217 00:41:45.639837    7944 start.go:293] postStartSetup for "functional-409700" (driver="docker")
	I1217 00:41:45.639837    7944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:41:45.643968    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:41:45.647579    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.702256    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:45.830302    7944 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:41:45.840912    7944 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:41:45.840912    7944 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:41:45.840912    7944 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 00:41:45.840912    7944 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 00:41:45.841469    7944 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 00:41:45.842433    7944 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts -> hosts in /etc/test/nested/copy/4168
	I1217 00:41:45.846605    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4168
	I1217 00:41:45.861850    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 00:41:45.894051    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts --> /etc/test/nested/copy/4168/hosts (40 bytes)
	I1217 00:41:45.924540    7944 start.go:296] duration metric: took 284.7004ms for postStartSetup
	I1217 00:41:45.929030    7944 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:41:45.931390    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.988238    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:46.118181    7944 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:41:46.128256    7944 fix.go:56] duration metric: took 2.4673029s for fixHost
	I1217 00:41:46.128336    7944 start.go:83] releasing machines lock for "functional-409700", held for 2.4673029s
	I1217 00:41:46.132380    7944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:41:46.192243    7944 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 00:41:46.196238    7944 ssh_runner.go:195] Run: cat /version.json
	I1217 00:41:46.196238    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:46.199443    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:46.250894    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:46.252723    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:46.374927    7944 ssh_runner.go:195] Run: systemctl --version
	W1217 00:41:46.375040    7944 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 00:41:46.393243    7944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:41:46.405015    7944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:41:46.411122    7944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:41:46.427748    7944 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:41:46.427748    7944 start.go:496] detecting cgroup driver to use...
	I1217 00:41:46.427748    7944 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:41:46.428359    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:41:46.459279    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:41:46.481169    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:41:46.495981    7944 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:41:46.501301    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 00:41:46.522269    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:41:46.543007    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:41:46.564748    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W1217 00:41:46.571173    7944 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 00:41:46.571173    7944 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 00:41:46.587140    7944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:41:46.608125    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:41:46.628561    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:41:46.651071    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 00:41:46.670567    7944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:41:46.691876    7944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:41:46.708884    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:46.907593    7944 ssh_runner.go:195] Run: sudo systemctl restart containerd
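The sed edits above rewrite /etc/containerd/config.toml so containerd uses the cgroupfs driver detected on the host; Docker gets the matching setting via /etc/docker/daemon.json a few lines below. A quick consistency check, sketched under the assumption that both daemons have been restarted:

    # Both runtimes should report the cgroup driver the detector chose
    docker info --format '{{.CgroupDriver}}'          # expected: cgroupfs
    grep 'SystemdCgroup' /etc/containerd/config.toml  # expected: SystemdCgroup = false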
	I1217 00:41:47.157536    7944 start.go:496] detecting cgroup driver to use...
	I1217 00:41:47.157588    7944 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:41:47.161701    7944 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 00:41:47.187508    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:41:47.211591    7944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:41:47.291331    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:41:47.315837    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:41:47.336371    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:41:47.365154    7944 ssh_runner.go:195] Run: which cri-dockerd
	I1217 00:41:47.376814    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 00:41:47.391947    7944 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 00:41:47.416863    7944 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 00:41:47.573803    7944 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 00:41:47.742508    7944 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 00:41:47.742508    7944 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 00:41:47.769569    7944 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 00:41:47.792419    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:47.926195    7944 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 00:41:48.924753    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:41:48.948387    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 00:41:48.972423    7944 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 00:41:49.001034    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:41:49.024808    7944 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 00:41:49.170637    7944 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 00:41:49.341524    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:49.489502    7944 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 00:41:49.515161    7944 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 00:41:49.538565    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:49.678445    7944 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 00:41:49.792662    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:41:49.810919    7944 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 00:41:49.817201    7944 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 00:41:49.824745    7944 start.go:564] Will wait 60s for crictl version
	I1217 00:41:49.829680    7944 ssh_runner.go:195] Run: which crictl
	I1217 00:41:49.841215    7944 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:41:49.886490    7944 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
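With /etc/crictl.yaml now pointing at unix:///var/run/cri-dockerd.sock, the crictl version call above reaches Docker 29.1.3 through the cri-dockerd shim. The endpoint can also be passed explicitly, for illustration:

    # Equivalent invocations that do not rely on /etc/crictl.yaml
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a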
	I1217 00:41:49.890545    7944 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:41:49.932656    7944 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:41:49.973421    7944 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 00:41:49.976704    7944 cli_runner.go:164] Run: docker exec -t functional-409700 dig +short host.docker.internal
	I1217 00:41:50.163467    7944 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 00:41:50.168979    7944 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
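The dig call resolves the Windows host from inside the container, and the grep checks whether that address is already mapped to host.minikube.internal in /etc/hosts. When the entry is missing, minikube appends it; a hypothetical manual equivalent would be:

    # Illustrative only: add the host alias by hand if it is absent
    echo "192.168.65.254 host.minikube.internal" | sudo tee -a /etc/hosts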
	I1217 00:41:50.182632    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:50.243980    7944 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 00:41:50.246233    7944 kubeadm.go:884] updating cluster {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:41:50.246321    7944 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:41:50.249328    7944 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:41:50.284688    7944 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-409700
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1217 00:41:50.284688    7944 docker.go:621] Images already preloaded, skipping extraction
	I1217 00:41:50.288341    7944 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:41:50.318208    7944 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-409700
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1217 00:41:50.318208    7944 cache_images.go:86] Images are preloaded, skipping loading
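The preload check is just the image listing above matched against the expected image set for v1.35.0-beta.0; since every required tag is present, extraction of the preload tarball is skipped. The same check, condensed into one line for illustration:

    # Non-empty output means the control-plane images are already loaded
    docker images --format '{{.Repository}}:{{.Tag}}' | grep ':v1.35.0-beta.0'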
	I1217 00:41:50.318208    7944 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1217 00:41:50.318208    7944 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-409700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:41:50.322786    7944 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 00:41:50.580992    7944 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 00:41:50.580992    7944 cni.go:84] Creating CNI manager for ""
	I1217 00:41:50.580992    7944 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:41:50.580992    7944 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:41:50.580992    7944 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-409700 NodeName:functional-409700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:41:50.581552    7944 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-409700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
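The generated config above combines InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration documents in a single file. Recent kubeadm releases can sanity-check such a file before it is used; a sketch, assuming the v1.35.0-beta.0 binary supports the subcommand as current releases do:

    # Validate the generated multi-document config on the node
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml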
	
	I1217 00:41:50.586113    7944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:41:50.602747    7944 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:41:50.606600    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:41:50.618442    7944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 00:41:50.639202    7944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:41:50.660303    7944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I1217 00:41:50.686181    7944 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:41:50.699393    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:50.841016    7944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:41:50.909095    7944 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700 for IP: 192.168.49.2
	I1217 00:41:50.909095    7944 certs.go:195] generating shared ca certs ...
	I1217 00:41:50.909181    7944 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:41:50.909751    7944 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 00:41:50.909751    7944 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 00:41:50.909751    7944 certs.go:257] generating profile certs ...
	I1217 00:41:50.911054    7944 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\client.key
	I1217 00:41:50.911486    7944 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key.dc66fb1b
	I1217 00:41:50.911858    7944 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key
	I1217 00:41:50.913273    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 00:41:50.913634    7944 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 00:41:50.913687    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 00:41:50.913976    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 00:41:50.914271    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 00:41:50.914593    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 00:41:50.915068    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 00:41:50.916395    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:41:50.945779    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:41:50.974173    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:41:51.006494    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:41:51.039634    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:41:51.069500    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:41:51.095965    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:41:51.124108    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:41:51.153111    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 00:41:51.181612    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:41:51.209244    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 00:41:51.236994    7944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:41:51.261730    7944 ssh_runner.go:195] Run: openssl version
	I1217 00:41:51.280852    7944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.301978    7944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 00:41:51.322912    7944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.331873    7944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.336845    7944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.388885    7944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:41:51.407531    7944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.426119    7944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:41:51.446689    7944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.455113    7944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.459541    7944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.507465    7944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:41:51.525452    7944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.543170    7944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 00:41:51.560439    7944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.566853    7944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.571342    7944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.621647    7944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
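Each test -L above checks a hash-named symlink such as /etc/ssl/certs/b5213941.0; that name is the OpenSSL subject hash under which TLS libraries look up a CA in the certs directory. The relationship can be reproduced directly:

    # The symlink name is derived from the certificate's subject hash
    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    ls -l "/etc/ssl/certs/${h}.0"   # expected: a symlink resolving to minikubeCA.pem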
	I1217 00:41:51.639899    7944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:41:51.651440    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:41:51.702199    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:41:51.752106    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:41:51.800819    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:41:51.851441    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:41:51.900439    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
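Each openssl call above uses -checkend 86400, which succeeds only if the certificate remains valid for at least the next 86400 seconds (24 hours); an expiring certificate would trigger regeneration. For example:

    # Exit status encodes whether the cert survives the next 24h
    openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt \
      -checkend 86400 && echo "still valid" || echo "expires within a day"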
	I1217 00:41:51.944312    7944 kubeadm.go:401] StartCluster: {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:41:51.948688    7944 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 00:41:51.985002    7944 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:41:51.998839    7944 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:41:51.998925    7944 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:41:52.003287    7944 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:41:52.016206    7944 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.019955    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:52.077101    7944 kubeconfig.go:125] found "functional-409700" server: "https://127.0.0.1:56622"
	I1217 00:41:52.084213    7944 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:41:52.100216    7944 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 00:24:17.645837868 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 00:41:50.679316242 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
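Drift detection here is nothing more than the unified diff shown: the previously applied kubeadm.yaml still carries the default admission-plugin list, while the new file carries the test's NamespaceAutoProvision override, so a reconfigure is required. The check itself can be reproduced verbatim:

    # A non-zero diff status is what flags the config drift
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
      || echo "drift detected, cluster will be reconfigured"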
	I1217 00:41:52.100258    7944 kubeadm.go:1161] stopping kube-system containers ...
	I1217 00:41:52.104145    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 00:41:52.137767    7944 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 00:41:52.163943    7944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:41:52.178186    7944 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 17 00:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 17 00:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 17 00:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 17 00:28 /etc/kubernetes/scheduler.conf
	
	I1217 00:41:52.182824    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:41:52.204493    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:41:52.219638    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.223951    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:41:52.243159    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:41:52.260005    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.264353    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:41:52.281662    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:41:52.297828    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.301928    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:41:52.320845    7944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:41:52.344713    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:52.568408    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:53.273580    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:53.519011    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:53.597190    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
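The restart path re-runs individual kubeadm phases instead of a full kubeadm init, regenerating only the certs, kubeconfigs, kubelet bootstrap, static control-plane manifests, and local etcd. Collected here for readability (the same five commands as above, minus the PATH plumbing):

    sudo kubeadm init phase certs all         --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase kubeconfig all    --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase kubelet-start     --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
    sudo kubeadm init phase etcd local        --config /var/tmp/minikube/kubeadm.yaml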
	I1217 00:41:53.657031    7944 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:41:53.662643    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:54.162433    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:54.661965    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:55.162165    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:55.662293    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:56.162422    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:56.662001    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:57.162515    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:57.662491    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:58.162857    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:58.662457    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:59.161782    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:59.663346    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:00.162336    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:00.662670    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:01.161692    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:01.663703    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:02.163358    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:02.663185    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:03.161803    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:03.663829    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:04.166542    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:04.662220    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:05.162702    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:05.662389    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:06.162800    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:06.662296    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:07.162770    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:07.662185    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:08.163484    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:08.662101    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:09.163166    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:09.661850    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:10.163219    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:10.662450    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:11.163350    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:11.661443    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:12.162140    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:12.662908    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:13.162389    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:13.662815    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:14.162317    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:14.662985    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:15.161953    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:15.662582    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:16.162711    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:16.662384    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:17.163213    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:17.662951    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:18.162863    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:18.663346    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:19.162301    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:19.664439    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:20.162163    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:20.663035    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:21.163263    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:21.663152    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:22.161955    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:22.663328    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:23.162424    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:23.662868    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:24.162408    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:24.663167    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:25.162910    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:25.662394    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:26.162371    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:26.662162    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:27.161992    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:27.662354    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:28.162558    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:28.663353    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:29.162056    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:29.662442    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:30.162717    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:30.662828    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:31.162856    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:31.662970    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:32.162077    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:32.662936    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:33.163640    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:33.662803    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:34.163131    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:34.662216    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:35.162136    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:35.662293    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:36.162086    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:36.663084    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:37.161766    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:37.664543    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:38.162298    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:38.662872    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:39.162985    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:39.663388    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:40.162888    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:40.662630    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:41.163272    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:41.662830    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:42.163249    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:42.662963    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:43.163651    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:43.662883    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:44.163502    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:44.662963    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:45.162911    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:45.663838    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:46.163526    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:46.663376    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:47.163496    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:47.662662    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:48.163562    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:48.663717    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:49.163610    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:49.662532    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:50.163860    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:50.663359    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:51.162827    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:51.663347    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:52.162765    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:52.663289    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:53.163097    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
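The block above is a ~500ms polling loop waiting for a kube-apiserver process to appear; it never does within the window, which is what ultimately fails this test. A rough shell equivalent of the loop, for illustration:

    # Poll for the apiserver process roughly twice a second
    while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      sleep 0.5
    done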
	I1217 00:42:53.661774    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:42:53.693561    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.693561    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:42:53.697663    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:42:53.729976    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.729976    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:42:53.733954    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:42:53.762808    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.762808    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:42:53.767775    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:42:53.797017    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.797017    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:42:53.800693    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:42:53.829028    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.829028    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:42:53.832681    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:42:53.860730    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.860730    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:42:53.864375    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:42:53.893858    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.893858    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:42:53.893858    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:42:53.893858    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:42:53.958662    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:42:53.958662    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:42:53.990110    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:42:53.990110    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:42:54.075886    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:42:54.062994   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.064181   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.068054   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.070063   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.071483   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:42:54.062994   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.064181   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.068054   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.070063   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.071483   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:42:54.075886    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:42:54.075886    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:42:54.124100    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:42:54.124100    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:42:56.693664    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:56.717550    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:42:56.749444    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.749476    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:42:56.753285    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:42:56.784073    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.784073    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:42:56.788320    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:42:56.817232    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.817232    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:42:56.821873    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:42:56.853120    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.853120    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:42:56.857160    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:42:56.887514    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.887514    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:42:56.891198    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:42:56.922568    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.922636    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:42:56.925831    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:42:56.954531    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.954531    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:42:56.954531    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:42:56.954531    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:42:57.019098    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:42:57.019098    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:42:57.050929    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:42:57.050955    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:42:57.138578    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:42:57.130682   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.131621   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.132913   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.134193   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.135394   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:42:57.130682   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.131621   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.132913   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.134193   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.135394   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:42:57.138578    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:42:57.138578    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:42:57.182851    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:42:57.182851    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:42:59.736560    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:59.756547    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:42:59.785666    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.785666    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:42:59.789191    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:42:59.818090    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.818151    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:42:59.821701    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:42:59.849198    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.849198    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:42:59.852824    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:42:59.880565    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.880565    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:42:59.884161    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:42:59.915009    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.915009    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:42:59.918550    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:42:59.949230    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.949230    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:42:59.953371    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:42:59.979962    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.979962    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:42:59.979962    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:42:59.979962    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:00.044543    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:00.044543    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:00.075045    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:00.075045    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:00.184096    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:00.172623   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.173411   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.176396   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.177559   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.178839   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:00.172623   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.173411   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.176396   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.177559   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.178839   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:00.184096    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:00.184096    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:00.229125    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:00.229125    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:02.788235    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:02.812066    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:02.844035    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.844035    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:02.847391    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:02.879346    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.879346    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:02.883507    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:02.911508    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.911573    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:02.915132    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:02.944186    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.944186    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:02.948177    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:02.977489    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.977489    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:02.980961    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:03.009657    7944 logs.go:282] 0 containers: []
	W1217 00:43:03.009657    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:03.013587    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:03.042816    7944 logs.go:282] 0 containers: []
	W1217 00:43:03.042816    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:03.042816    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:03.042816    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:03.126456    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:03.115768   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.116665   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.118976   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.119737   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.121834   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:03.115768   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.116665   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.118976   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.119737   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.121834   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:03.126456    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:03.126456    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:03.167566    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:03.167566    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:03.219094    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:03.219094    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:03.285299    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:03.285299    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:05.820619    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:05.845854    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:05.875867    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.875867    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:05.879229    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:05.909558    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.909558    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:05.912556    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:05.942200    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.942273    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:05.945627    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:05.975289    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.975289    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:05.979052    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:06.009570    7944 logs.go:282] 0 containers: []
	W1217 00:43:06.009570    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:06.013210    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:06.042977    7944 logs.go:282] 0 containers: []
	W1217 00:43:06.042977    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:06.046640    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:06.075849    7944 logs.go:282] 0 containers: []
	W1217 00:43:06.075849    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:06.075849    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:06.075849    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:06.120266    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:06.120266    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:06.168821    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:06.168821    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:06.230879    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:06.230879    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:06.260885    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:06.260885    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:06.340031    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:06.330529   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.331395   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.334293   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.335557   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.336695   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:06.330529   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.331395   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.334293   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.335557   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.336695   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:08.845285    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:08.868682    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:08.897291    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.897291    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:08.900871    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:08.928001    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.928001    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:08.931488    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:08.961792    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.961792    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:08.965426    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:08.994180    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.994253    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:08.997983    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:09.026539    7944 logs.go:282] 0 containers: []
	W1217 00:43:09.026539    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:09.030228    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:09.061065    7944 logs.go:282] 0 containers: []
	W1217 00:43:09.061094    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:09.064483    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:09.093815    7944 logs.go:282] 0 containers: []
	W1217 00:43:09.093815    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:09.093815    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:09.093815    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:09.173989    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:09.162229   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.164006   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.164905   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.168015   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.169720   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:09.162229   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.164006   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.164905   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.168015   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.169720   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:09.174037    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:09.174037    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:09.214846    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:09.214846    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:09.269685    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:09.269685    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:09.331802    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:09.331802    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:11.869149    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:11.892656    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:11.921635    7944 logs.go:282] 0 containers: []
	W1217 00:43:11.921635    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:11.926449    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:11.957938    7944 logs.go:282] 0 containers: []
	W1217 00:43:11.957938    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:11.961505    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:11.991894    7944 logs.go:282] 0 containers: []
	W1217 00:43:11.991894    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:11.995992    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:12.025039    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.025039    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:12.029016    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:12.060459    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.060459    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:12.064652    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:12.096164    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.096164    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:12.100038    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:12.129762    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.129824    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:12.129824    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:12.129824    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:12.194950    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:12.194950    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:12.227435    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:12.227435    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:12.311750    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:12.301902   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.303071   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.304222   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.305986   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.307529   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:12.301902   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.303071   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.304222   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.305986   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.307529   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:12.311750    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:12.311750    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:12.352387    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:12.352387    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:14.907650    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:14.933011    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:14.961340    7944 logs.go:282] 0 containers: []
	W1217 00:43:14.961340    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:14.964869    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:14.991179    7944 logs.go:282] 0 containers: []
	W1217 00:43:14.991179    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:14.996502    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:15.025325    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.025325    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:15.031024    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:15.058452    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.058452    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:15.062691    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:15.091232    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.091232    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:15.096528    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:15.127551    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.127551    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:15.131605    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:15.161113    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.161113    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:15.161113    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:15.161113    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:15.189644    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:15.189644    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:15.270306    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:15.259821   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.260629   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.263303   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.264244   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.266788   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:15.259821   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.260629   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.263303   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.264244   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.266788   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:15.270306    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:15.270306    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:15.311714    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:15.311714    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:15.371391    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:15.371391    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:17.939209    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:17.962095    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:17.990273    7944 logs.go:282] 0 containers: []
	W1217 00:43:17.990273    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:17.993918    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:18.025229    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.025229    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:18.029538    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:18.060092    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.060092    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:18.064444    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:18.095199    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.095230    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:18.098808    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:18.129658    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.129658    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:18.133236    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:18.163628    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.163628    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:18.167493    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:18.199253    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.199253    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:18.199253    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:18.199253    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:18.252203    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:18.252203    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:18.316097    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:18.316097    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:18.347393    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:18.347393    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:18.426495    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:18.416595   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.417796   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.419140   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.420105   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.421235   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:18.416595   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.417796   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.419140   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.420105   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.421235   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:18.426495    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:18.426495    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:20.972950    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:20.998624    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:21.025837    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.025837    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:21.029315    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:21.061085    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.061085    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:21.065387    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:21.092871    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.092871    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:21.096706    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:21.126179    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.126179    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:21.129834    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:21.159720    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.159720    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:21.163263    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:21.193011    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.193011    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:21.196667    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:21.229222    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.229222    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:21.229222    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:21.229222    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:21.279391    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:21.279391    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:21.341649    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:21.341649    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:21.372055    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:21.372055    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:21.451011    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:21.440556   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.441861   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.442811   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.446984   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.448016   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:21.440556   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.441861   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.442811   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.446984   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.448016   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:21.451011    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:21.451011    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:24.011538    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:24.037171    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:24.067520    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.067544    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:24.070755    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:24.101421    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.101454    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:24.104927    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:24.133336    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.133336    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:24.137178    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:24.164662    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.164662    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:24.168324    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:24.200218    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.200218    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:24.203764    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:24.234603    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.234603    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:24.238011    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:24.267400    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.267400    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:24.267400    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:24.267400    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:24.348263    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:24.338918   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.339739   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.341999   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.343378   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.344717   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:24.338918   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.339739   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.341999   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.343378   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.344717   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:24.348263    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:24.348263    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:24.393298    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:24.393298    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:24.446709    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:24.446709    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:24.518891    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:24.518891    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:27.054877    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:27.078747    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:27.111142    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.111142    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:27.114844    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:27.143801    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.143801    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:27.147663    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:27.176215    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.176215    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:27.179758    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:27.208587    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.208587    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:27.211873    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:27.241061    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.241061    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:27.244905    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:27.276011    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.276065    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:27.279281    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:27.309068    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.309068    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:27.309068    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:27.309068    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:27.372079    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:27.372079    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:27.403215    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:27.403215    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:27.502209    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:27.492924   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.494023   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.494999   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.496603   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.497726   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:27.492924   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.494023   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.494999   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.496603   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.497726   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:27.502209    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:27.502209    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:27.543251    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:27.543251    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:30.103213    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:30.126929    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:30.158148    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.158148    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:30.162286    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:30.191927    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.191927    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:30.195748    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:30.225040    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.225040    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:30.229444    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:30.260498    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.260498    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:30.264750    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:30.293312    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.293312    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:30.296869    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:30.325167    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.325167    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:30.328938    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:30.363267    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.363267    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:30.363267    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:30.363267    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:30.393795    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:30.393795    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:30.487446    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:30.464124   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.465346   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.468428   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.469684   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.481402   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:30.464124   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.465346   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.468428   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.469684   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.481402   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:30.487446    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:30.487446    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:30.530226    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:30.530226    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:30.585635    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:30.585635    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:33.151438    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:33.175766    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:33.207203    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.207203    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:33.210965    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:33.237795    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.237795    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:33.242087    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:33.273041    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.273041    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:33.277103    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:33.305283    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.305283    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:33.309730    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:33.337737    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.337737    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:33.341408    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:33.370694    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.370694    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:33.374111    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:33.407836    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.407836    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:33.407836    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:33.407836    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:33.434955    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:33.434955    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:33.529365    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:33.517320   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.518450   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.519517   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.520800   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.522107   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:33.517320   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.518450   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.519517   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.520800   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.522107   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:33.529365    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:33.529365    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:33.572145    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:33.572145    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:33.624502    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:33.624502    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:36.189426    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:36.213378    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:36.243407    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.243407    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:36.246746    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:36.274995    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.274995    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:36.278271    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:36.305533    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.305533    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:36.309459    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:36.338892    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.338892    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:36.342669    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:36.373516    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.373516    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:36.377003    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:36.404831    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.404831    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:36.408515    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:36.437790    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.437790    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:36.437790    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:36.437790    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:36.540076    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:36.526050   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.528341   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.531176   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.532283   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.533415   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:36.526050   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.528341   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.531176   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.532283   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.533415   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:36.540076    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:36.540076    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:36.580664    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:36.580664    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:36.635234    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:36.635234    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:36.695702    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:36.695702    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:39.230926    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:39.255012    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:39.288661    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.288661    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:39.293143    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:39.320903    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.320967    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:39.324725    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:39.350161    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.350161    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:39.353696    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:39.380073    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.380073    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:39.383515    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:39.411510    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.411510    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:39.415491    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:39.449683    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.449683    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:39.453620    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:39.487800    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.487800    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:39.487800    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:39.487800    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:39.552943    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:39.552943    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:39.582035    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:39.583033    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:39.660499    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:39.647312   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.648102   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.652665   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.654408   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.654966   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:39.647312   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.648102   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.652665   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.654408   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.654966   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:39.660499    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:39.660499    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:39.705645    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:39.705645    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:42.267731    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:42.297885    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:42.329299    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.329326    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:42.332959    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:42.361173    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.361173    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:42.365107    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:42.393236    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.393236    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:42.397363    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:42.430949    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.430949    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:42.435377    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:42.465696    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.465696    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:42.468849    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:42.512182    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.512182    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:42.515699    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:42.545680    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.545680    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:42.545680    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:42.545680    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:42.607372    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:42.607372    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:42.637761    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:42.637761    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:42.720140    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:42.709136   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.709905   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.711877   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.712984   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.713829   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:42.709136   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.709905   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.711877   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.712984   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.713829   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:42.720140    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:42.720140    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:42.760712    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:42.760712    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:45.318861    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:45.345331    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:45.376136    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.376136    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:45.379539    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:45.408720    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.408720    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:45.412623    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:45.444664    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.444664    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:45.448226    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:45.484195    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.484195    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:45.488022    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:45.515242    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.515242    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:45.519184    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:45.551260    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.551260    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:45.554894    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:45.581795    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.581795    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:45.581795    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:45.581795    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:45.625880    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:45.625880    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:45.678280    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:45.678280    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:45.738938    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:45.738938    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:45.770054    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:45.770054    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:45.854057    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:45.839960   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.842045   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.843544   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.846571   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.847420   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:45.839960   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.842045   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.843544   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.846571   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.847420   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:48.359806    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:48.384092    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:48.415158    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.415192    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:48.418996    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:48.446149    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.446149    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:48.449676    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:48.487416    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.487416    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:48.491652    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:48.520073    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.520073    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:48.524101    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:48.550421    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.550421    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:48.554497    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:48.583643    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.583666    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:48.587154    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:48.616812    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.616812    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:48.616812    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:48.616812    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:48.681323    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:48.681323    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:48.712866    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:48.712866    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:48.798447    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:48.788338   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.789333   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.790575   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.791655   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.792589   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:48.788338   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.789333   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.790575   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.791655   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.792589   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:48.798447    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:48.798447    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:48.839546    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:48.839546    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:51.393802    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:51.419527    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:51.453783    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.453783    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:51.457619    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:51.496053    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.496053    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:51.499949    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:51.528492    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.528492    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:51.531946    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:51.560363    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.560363    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:51.563875    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:51.597143    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.597143    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:51.600764    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:51.630459    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.630459    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:51.634473    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:51.667072    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.667072    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:51.667072    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:51.667072    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:51.719154    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:51.719154    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:51.779761    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:51.779761    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:51.810036    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:51.810036    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:51.887952    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:51.877388   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.878091   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.881129   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.882321   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.883227   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:51.877388   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.878091   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.881129   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.882321   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.883227   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:51.887952    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:51.887952    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
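Every cycle from here on ends the same way, so the quickest confirmation that the apiserver simply never came up is to probe port 8441 directly from inside the node. A minimal sketch under the same assumptions as above (PROFILE is a hypothetical name; ss comes from iproute2, which recent minikube node images are assumed to ship, and /livez is the standard kube-apiserver health endpoint):

    # Check whether anything is listening where kubectl is connecting.
    minikube ssh -p "$PROFILE" "sudo ss -tlnp | grep 8441 || echo 'nothing listening on 8441'"
    minikube ssh -p "$PROFILE" "curl -ksS https://localhost:8441/livez || true"

With no listener on 8441, the repeated "connection refused" stderr blocks below are expected noise; the actionable evidence is in the kubelet journal gathered in each pass.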
	I1217 00:43:54.434243    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:54.457541    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:54.486698    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.486698    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:54.491137    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:54.520500    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.520500    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:54.524176    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:54.552487    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.552487    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:54.556310    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:54.585424    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.585424    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:54.588683    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:54.619901    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.619970    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:54.623608    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:54.655623    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.655706    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:54.658833    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:54.690413    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.690413    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:54.690413    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:54.690492    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:54.771466    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:54.760114   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.761075   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.762159   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.763541   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.764770   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:54.771466    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:54.771466    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:54.813307    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:54.813307    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:54.874633    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:54.875154    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:54.937630    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:54.937630    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:57.472782    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:57.497186    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:57.526677    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.526745    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:57.530218    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:57.557916    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.557948    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:57.562041    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:57.590924    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.590924    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:57.594569    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:57.621738    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.621738    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:57.627319    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:57.656111    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.656111    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:57.659689    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:57.690217    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.690217    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:57.693915    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:57.723629    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.723629    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:57.723629    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:57.723688    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:57.788129    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:57.788129    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:57.818809    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:57.818809    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:57.903055    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:57.891485   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.892810   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.893729   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.896044   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.896988   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:57.903055    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:57.903055    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:57.944153    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:57.944153    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:00.501950    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:00.530348    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:00.561749    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.562270    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:00.566179    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:00.596812    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.596812    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:00.600551    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:00.628898    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.628898    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:00.632187    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:00.661210    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.661255    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:00.664477    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:00.692625    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.692625    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:00.696565    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:00.727420    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.727420    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:00.731176    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:00.761041    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.761041    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:00.761041    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:00.761041    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:00.813195    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:00.813286    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:00.875819    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:00.875819    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:00.906004    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:00.906004    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:00.995354    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:00.985498   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.986676   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.987771   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.989033   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.990260   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:00.995354    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:00.995354    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:03.542659    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:03.566401    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:03.597875    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.597875    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:03.602087    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:03.631114    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.631114    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:03.635275    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:03.664437    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.665863    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:03.669211    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:03.697100    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.697100    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:03.701535    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:03.731200    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.731200    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:03.735391    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:03.764893    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.764893    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:03.768303    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:03.799245    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.799245    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:03.799245    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:03.799245    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:03.863068    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:03.863068    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:03.892825    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:03.892825    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:03.975253    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:03.964400   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.965730   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.967384   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.969805   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.970929   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:03.975253    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:03.975253    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:04.016164    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:04.016164    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
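Between probes, minikube gathers the same four log sources on every pass. For manual triage the identical one-liners apply, copied from the log (the backquoted `which crictl || echo crictl` simply falls back to the bare crictl name when it is not on PATH):

    sudo journalctl -u kubelet -n 400                # kubelet: why no static pods were started
    sudo journalctl -u docker -u cri-docker -n 400   # container runtime side
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a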
	I1217 00:44:06.571695    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:06.597029    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:06.627889    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.627889    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:06.631611    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:06.661118    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.661118    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:06.664736    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:06.694336    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.694336    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:06.698523    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:06.728693    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.728693    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:06.732767    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:06.762060    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.762130    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:06.765313    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:06.795222    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.795222    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:06.799233    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:06.829491    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.829525    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:06.829525    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:06.829558    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:06.858476    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:06.858476    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:06.938014    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:06.927171   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.928103   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.929321   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.932292   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.933974   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:06.938014    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:06.938014    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:06.978960    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:06.978960    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:07.027942    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:07.027942    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:09.595591    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:09.619202    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:09.648727    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.648727    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:09.653265    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:09.684682    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.684682    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:09.688140    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:09.715249    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.715249    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:09.718566    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:09.749969    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.749969    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:09.753003    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:09.779832    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.779832    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:09.783608    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:09.812286    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.812326    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:09.816849    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:09.845801    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.845801    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:09.845801    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:09.845801    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:09.890276    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:09.891278    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:09.945030    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:09.945030    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:10.007215    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:10.007215    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:10.037318    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:10.037318    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:10.122162    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:10.111724   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.112922   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.114124   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.115187   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.116442   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:12.627660    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:12.651516    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:12.684952    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.684952    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:12.688749    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:12.717327    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.717327    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:12.721146    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:12.749548    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.749548    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:12.752616    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:12.784015    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.784015    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:12.787596    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:12.817388    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.817388    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:12.821554    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:12.849737    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.849737    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:12.853589    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:12.882735    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.882735    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:12.882735    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:12.882735    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:12.966389    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:12.956160   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.957149   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.957910   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.960356   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.961793   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:12.966389    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:12.966389    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:13.009759    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:13.009759    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:13.057767    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:13.057767    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:13.121685    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:13.121685    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
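The kubectl failures are all one symptom: nothing is listening on the apiserver port 8441 inside the node, consistent with the empty k8s_kube-apiserver container list above. A quick way to watch for the port ever coming up (curl flags are standard; /healthz and /readyz are the upstream kube-apiserver health endpoints, assumed unchanged in v1.35.0-beta.0):

    # connection refused here confirms the apiserver never bound :8441
    curl -k https://localhost:8441/healthz
    curl -k 'https://localhost:8441/readyz?verbose'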
	I1217 00:44:15.659014    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:15.683463    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:15.714834    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.714857    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:15.718351    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:15.749782    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.749812    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:15.753368    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:15.782321    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.782321    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:15.785961    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:15.816416    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.816416    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:15.822152    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:15.848733    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.848791    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:15.852246    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:15.881272    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.881310    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:15.886378    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:15.917818    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.917818    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:15.917892    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:15.917892    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:15.983033    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:15.983033    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:16.015133    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:16.015133    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:16.105395    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:16.093215   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.094155   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.098670   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.100261   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.100776   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:16.105395    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:16.105438    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:16.146209    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:16.146209    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:18.701433    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:18.725475    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:18.759149    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.759149    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:18.762892    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:18.795437    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.795437    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:18.799127    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:18.835050    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.835580    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:18.839967    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:18.867222    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.867222    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:18.870583    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:18.899263    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.899263    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:18.902802    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:18.934115    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.934115    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:18.937420    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:18.969205    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.969205    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:18.969205    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:18.969205    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:19.030841    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:19.030841    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:19.061419    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:19.061938    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:19.143852    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:19.132860   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.133712   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.136777   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.137881   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.138767   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:19.143852    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:19.143852    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:19.187635    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:19.187709    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:21.747174    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:21.771176    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:21.800995    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.800995    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:21.804142    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:21.836064    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.836131    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:21.839865    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:21.868223    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.868292    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:21.871954    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:21.900714    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.900714    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:21.904281    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:21.931611    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.931611    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:21.935666    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:21.963188    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.963188    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:21.967538    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:21.994527    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.994527    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:21.994527    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:21.994527    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:22.061635    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:22.061635    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:22.093213    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:22.093213    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:22.179644    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:22.168849   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.170300   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.172127   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.174562   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.176641   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:22.179644    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:22.179644    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:22.223092    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:22.223092    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:24.783065    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:24.806396    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:24.838512    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.838512    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:24.842023    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:24.871052    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.871052    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:24.874639    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:24.903466    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.903466    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:24.906973    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:24.938000    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.938000    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:24.942149    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:24.970337    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.970371    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:24.973308    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:25.003460    7944 logs.go:282] 0 containers: []
	W1217 00:44:25.003460    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:25.007008    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:25.035638    7944 logs.go:282] 0 containers: []
	W1217 00:44:25.035638    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:25.035638    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:25.035638    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:25.097833    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:25.097833    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:25.128758    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:25.128758    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:25.209843    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:25.201498   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.202808   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.204759   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.205808   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.207251   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:25.209843    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:25.209843    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:25.250600    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:25.250600    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:27.806610    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:27.831257    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:27.864142    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.864142    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:27.867995    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:27.897561    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.897561    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:27.900925    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:27.931079    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.931079    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:27.934151    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:27.964321    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.964321    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:27.969534    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:27.999709    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.999709    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:28.002966    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:28.034961    7944 logs.go:282] 0 containers: []
	W1217 00:44:28.035008    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:28.038649    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:28.067733    7944 logs.go:282] 0 containers: []
	W1217 00:44:28.067733    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:28.067733    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:28.067733    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:28.150573    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:28.140463   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.141608   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.143366   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.146165   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.147662   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:28.150573    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:28.150573    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:28.192203    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:28.192203    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:28.248534    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:28.248624    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:28.306585    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:28.306585    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:30.842138    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:30.867340    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:30.899142    7944 logs.go:282] 0 containers: []
	W1217 00:44:30.899142    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:30.903037    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:30.932057    7944 logs.go:282] 0 containers: []
	W1217 00:44:30.932057    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:30.938184    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:30.965554    7944 logs.go:282] 0 containers: []
	W1217 00:44:30.965554    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:30.969154    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:30.997999    7944 logs.go:282] 0 containers: []
	W1217 00:44:30.997999    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:31.001861    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:31.031079    7944 logs.go:282] 0 containers: []
	W1217 00:44:31.031142    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:31.034735    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:31.063582    7944 logs.go:282] 0 containers: []
	W1217 00:44:31.063582    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:31.069235    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:31.098869    7944 logs.go:282] 0 containers: []
	W1217 00:44:31.098948    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:31.098948    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:31.098948    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:31.127253    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:31.127253    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:31.211541    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:31.202334   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.203549   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.205527   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.206517   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.207872   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:31.211541    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:31.211541    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:31.258478    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:31.258478    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:31.308932    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:31.308932    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:33.876600    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:33.899781    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:33.930969    7944 logs.go:282] 0 containers: []
	W1217 00:44:33.930969    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:33.934621    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:33.964938    7944 logs.go:282] 0 containers: []
	W1217 00:44:33.964938    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:33.968775    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:33.998741    7944 logs.go:282] 0 containers: []
	W1217 00:44:33.998793    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:34.002265    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:34.030279    7944 logs.go:282] 0 containers: []
	W1217 00:44:34.030279    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:34.034177    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:34.063244    7944 logs.go:282] 0 containers: []
	W1217 00:44:34.063244    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:34.066512    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:34.095842    7944 logs.go:282] 0 containers: []
	W1217 00:44:34.095842    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:34.099843    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:34.133173    7944 logs.go:282] 0 containers: []
	W1217 00:44:34.133173    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:34.133173    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:34.133173    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:34.198297    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:34.198297    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:34.229134    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:34.229134    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:34.305327    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:34.295599   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.296405   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.298959   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.301044   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.302073   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:34.305327    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:34.305327    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:34.346912    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:34.346912    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:36.903423    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:36.929005    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:36.959255    7944 logs.go:282] 0 containers: []
	W1217 00:44:36.959255    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:36.962841    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:36.991016    7944 logs.go:282] 0 containers: []
	W1217 00:44:36.991016    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:36.995294    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:37.027615    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.027615    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:37.031225    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:37.063793    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.063793    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:37.067539    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:37.098257    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.098257    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:37.104945    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:37.135094    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.135094    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:37.139494    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:37.170825    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.170825    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:37.170825    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:37.170825    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:37.236025    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:37.236025    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:37.266143    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:37.266143    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:37.356401    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:37.344016   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.345140   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.346045   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.350812   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.351984   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:37.356401    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:37.356401    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:37.397010    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:37.397010    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:39.951831    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:39.975669    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:40.007629    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.007629    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:40.011435    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:40.041534    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.041534    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:40.045543    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:40.072927    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.072927    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:40.076835    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:40.104604    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.104604    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:40.108678    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:40.136644    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.136644    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:40.140732    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:40.172579    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.172579    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:40.176191    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:40.207078    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.207078    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:40.207078    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:40.207171    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:40.271921    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:40.271921    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:40.302650    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:40.302650    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:40.384552    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:40.373909   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.375248   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.376424   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.377960   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.378727   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:40.384552    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:40.384552    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:40.425377    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:40.425377    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:42.980281    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:43.003860    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:43.036168    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.036168    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:43.040136    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:43.068891    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.068891    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:43.072976    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:43.103823    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.103823    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:43.107774    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:43.134339    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.134339    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:43.137929    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:43.168166    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.168166    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:43.172279    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:43.200333    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.200333    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:43.204183    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:43.236225    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.236225    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:43.236225    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:43.236225    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:43.280577    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:43.280577    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:43.331604    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:43.331604    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:43.392357    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:43.392357    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:43.423125    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:43.423125    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:43.508115    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:43.496794   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.498087   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.499982   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.501972   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.502846   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:46.013886    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:46.042290    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:46.074707    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.074707    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:46.078216    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:46.109309    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.109309    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:46.112661    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:46.141002    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.141002    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:46.144585    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:46.172550    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.172550    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:46.178681    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:46.209054    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.209054    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:46.212761    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:46.242212    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.242212    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:46.245894    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:46.273677    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.273677    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:46.273719    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:46.273719    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:46.339840    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:46.339840    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:46.373287    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:46.373287    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:46.452686    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:46.442520   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.443589   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.446075   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.448524   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.449556   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:46.452686    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:46.452686    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:46.498608    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:46.498608    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:49.050761    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:49.075428    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:49.105673    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.105673    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:49.109924    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:49.140245    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.140245    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:49.143980    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:49.175115    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.175115    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:49.181267    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:49.213667    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.213667    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:49.217486    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:49.249277    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.249277    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:49.252880    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:49.279244    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.279287    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:49.282893    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:49.313826    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.313826    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:49.313826    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:49.313826    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:49.395270    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:49.385168   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.385960   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.388757   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.390178   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.391697   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:49.395270    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:49.395270    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:49.439990    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:49.439990    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:49.493048    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:49.493048    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:49.555675    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:49.555675    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:52.091191    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:52.121154    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:52.152807    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.152807    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:52.157047    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:52.185793    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.185793    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:52.188792    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:52.217804    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.218793    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:52.221792    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:52.253749    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.253749    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:52.257528    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:52.286783    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.286783    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:52.290341    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:52.319799    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.319799    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:52.323376    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:52.351656    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.351656    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:52.351656    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:52.351656    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:52.395381    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:52.395381    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:52.449049    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:52.449049    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:52.511942    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:52.511942    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:52.541707    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:52.541707    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:52.622537    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:52.614766   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.615704   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.616948   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.617983   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.618983   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
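The block above is minikube's health probe failing: it shells into the node and runs the bundled kubectl against the node-local kubeconfig, and the connection refused on localhost:8441 simply means nothing is listening on the apiserver port yet. A minimal way to rerun the same probe by hand (a sketch; <profile> is a placeholder for whatever profile this test created, not a name taken from the log):

    # Same command the log runs via ssh_runner; fails with the identical
    # "connection refused" while the apiserver is down.
    minikube -p <profile> ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      describe nodes --kubeconfig=/var/lib/minikube/kubeconfig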
	I1217 00:44:55.130052    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:55.154497    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:55.185053    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.185086    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:55.188968    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:55.215935    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.215935    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:55.220385    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:55.249124    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.249159    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:55.253058    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:55.282148    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.282230    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:55.285701    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:55.315081    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.315081    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:55.320240    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:55.350419    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.350449    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:55.353993    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:55.386346    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.386346    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:55.386346    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:55.386346    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:55.463518    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:55.456649   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.457723   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.458695   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.460286   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.461389   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:55.463518    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:55.463518    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:55.502884    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:55.502884    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:55.567300    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:55.567300    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:55.630547    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:55.630547    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
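Between sweeps, minikube polls for a running apiserver process with pgrep, which is the roughly three-second heartbeat visible in the timestamps above. Roughly, as a standalone loop run on the node (the retry budget is an assumption, not taken from the log):

    # Wait-loop equivalent of the repeated pgrep probes above.
    for _ in $(seq 1 20); do   # retry count is an assumption
      sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null && { echo up; break; }
      sleep 3                  # matches the ~3 s gap between log cycles
    done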
	I1217 00:44:58.165717    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:58.189522    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:58.223415    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.223415    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:58.227138    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:58.256133    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.256133    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:58.259919    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:58.289751    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.289751    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:58.293341    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:58.323835    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.323835    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:58.327981    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:58.358897    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.358897    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:58.362525    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:58.393696    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.393696    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:58.397786    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:58.426810    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.426810    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:58.426810    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:58.426810    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:58.492668    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:58.492668    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:58.523854    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:58.523854    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:58.609164    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:58.598901   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.599812   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.602076   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.604272   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.606217   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:58.609164    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:58.609164    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:58.654356    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:58.654356    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
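Each cycle sweeps the same seven control-plane containers by the k8s_<component> name prefix that dockershim-style runtimes give pod containers. The seven per-component docker ps calls collapse to one loop:

    # One-shot version of the per-component checks repeated in each cycle.
    for c in kube-apiserver etcd coredns kube-scheduler \
             kube-proxy kube-controller-manager kindnet; do
      ids=$(docker ps -a --filter "name=k8s_$c" --format '{{.ID}}')
      echo "$c: ${ids:-<none>}"
    done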
	I1217 00:45:01.211859    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:01.236949    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:01.268645    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.268645    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:01.273856    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:01.305336    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.305336    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:01.309133    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:01.339056    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.339056    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:01.343432    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:01.373802    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.373802    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:01.378587    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:01.408624    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.408624    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:01.414210    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:01.446499    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.446499    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:01.450189    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:01.479782    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.479782    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:01.479782    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:01.479829    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:01.526819    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:01.526819    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:01.591797    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:01.591797    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:01.624206    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:01.624206    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:01.713187    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:01.701188   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.703402   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.704627   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.705600   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.706926   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:01.713187    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:01.713187    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:04.261443    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:04.286201    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:04.315610    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.315610    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:04.319607    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:04.348007    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.348007    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:04.351825    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:04.378854    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.378854    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:04.382430    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:04.414385    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.414385    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:04.419751    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:04.447734    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.447734    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:04.452650    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:04.483414    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.483414    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:04.488519    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:04.520173    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.520173    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:04.520173    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:04.520173    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:04.583573    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:04.583573    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:04.615102    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:04.615102    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:04.703186    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:04.693374   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.694566   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.695324   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.698221   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.699360   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:04.703186    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:04.703186    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:04.745696    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:04.745696    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
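The "container status" sweep is a fallback chain: use crictl when it resolves on PATH, otherwise plain docker ps. Annotated, the one-liner from the log reads:

    # `which crictl || echo crictl` expands to the full path when crictl
    # exists, or to the bare word "crictl" (which then fails to run),
    # letting the outer || fall through to docker on any failure.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a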
	I1217 00:45:07.302305    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:07.327138    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:07.357072    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.357072    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:07.361245    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:07.393135    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.393135    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:07.397020    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:07.426598    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.426623    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:07.430259    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:07.459216    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.459216    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:07.463233    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:07.491206    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.491206    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:07.496432    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:07.527082    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.527082    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:07.530080    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:07.563609    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.563609    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:07.563609    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:07.563609    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:07.624175    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:07.624175    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:07.654046    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:07.655373    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:07.733760    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:07.724686   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.725828   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.726798   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.727878   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.729852   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:07.733760    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:07.733760    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:07.775826    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:07.775826    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:10.333009    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:10.359433    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:10.394281    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.394281    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:10.399772    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:10.431921    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.431921    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:10.435941    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:10.466929    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.466929    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:10.469952    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:10.500979    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.500979    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:10.504132    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:10.532972    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.532972    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:10.536526    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:10.565609    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.565609    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:10.569307    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:10.597263    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.597263    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:10.597263    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:10.597263    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:10.625496    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:10.625496    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:10.716452    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:10.706137   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.707571   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.709046   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.710674   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.711932   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:10.716452    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:10.716535    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:10.757898    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:10.757898    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:10.807685    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:10.807685    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
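The journal and kernel-log sweeps are capped at the last 400 lines per source. The exact commands, runnable directly on the node:

    # -H gives human-readable dmesg output, -P suppresses its pager, and
    # -L=never strips color codes so the pipe to tail stays clean.
    sudo journalctl -u docker -u cri-docker -n 400
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400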
	I1217 00:45:13.376757    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:13.401022    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:13.433179    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.433179    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:13.438943    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:13.466315    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.466315    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:13.469406    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:13.498170    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.498170    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:13.503463    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:13.531045    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.531045    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:13.534623    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:13.563549    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.563572    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:13.567173    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:13.595412    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.595412    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:13.599138    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:13.627347    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.627347    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:13.627347    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:13.627347    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:13.687440    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:13.688440    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:13.718641    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:13.718785    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:13.801949    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:13.792952   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.794106   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.795272   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.796913   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.798020   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:13.801949    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:13.801949    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:13.846773    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:13.847288    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:16.401019    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:16.426837    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:16.461985    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.461985    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:16.465693    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:16.494330    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.494354    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:16.497490    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:16.527742    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.527742    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:16.531287    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:16.561095    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.561095    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:16.564902    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:16.594173    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.594173    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:16.597642    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:16.627598    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.627598    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:16.630884    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:16.659950    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.660031    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:16.660031    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:16.660031    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:16.740660    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:16.730888   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.732344   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.734426   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.736250   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.737220   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:16.740692    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:16.740692    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:16.782319    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:16.782319    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:16.835245    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:16.835245    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:16.900147    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:16.900147    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
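The five near-identical memcache.go lines per failed probe are kubectl's discovery client retrying the API group list a few times before giving up, which matches the five lines in each block above. Any kubectl pointed at the dead endpoint reproduces them:

    # Reproduces the same discovery retries and the final "connection
    # refused" message (run somewhere nothing listens on port 8441).
    kubectl --server=https://localhost:8441 get nodes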
	I1217 00:45:19.437638    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:19.462468    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:19.493244    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.493244    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:19.497367    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:19.526430    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.526430    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:19.530589    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:19.559166    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.559222    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:19.562429    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:19.594311    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.594311    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:19.597936    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:19.627339    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.627339    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:19.632033    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:19.659648    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.659648    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:19.663351    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:19.696628    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.696628    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:19.696628    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:19.696628    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:19.749701    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:19.749701    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:19.809018    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:19.809018    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:19.838771    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:19.838771    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:19.921290    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:19.910944   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.912216   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.913176   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.916258   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.918467   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:19.921290    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:19.921290    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:22.468833    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:22.494625    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:22.526034    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.526034    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:22.529623    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:22.565289    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.565289    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:22.569286    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:22.597280    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.597280    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:22.601010    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:22.630330    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.630330    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:22.634511    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:22.663939    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.663939    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:22.667575    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:22.696762    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.696792    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:22.700137    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:22.732285    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.732285    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:22.732285    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:22.732285    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:22.814702    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:22.805990   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.808311   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.809673   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.810947   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.811986   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:22.814702    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:22.814702    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:22.864515    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:22.864515    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:22.917896    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:22.917896    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:22.984213    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:22.984213    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
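
Each cycle above probes every control-plane component by listing containers (running or exited) whose name carries the k8s_<component> prefix that kubelet assigns to static-pod containers; "0 containers" plus the "No container was found" warning means kubelet never started the pod. A minimal, self-contained sketch of that probe, assuming only the docker CLI on PATH; the helper name listContainers is ours, not minikube's:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers mirrors the probe logged at logs.go:282: list the IDs of
	// all containers, running or exited, whose name matches k8s_<component>.
	func listContainers(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil // empty slice == "No container was found"
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
			ids, err := listContainers(c)
			if err != nil {
				fmt.Printf("%s: probe failed: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}
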
	I1217 00:45:25.517090    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:25.542531    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:25.575294    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.575294    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:25.579526    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:25.610041    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.610041    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:25.614160    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:25.643682    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.643712    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:25.647264    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:25.679557    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.679557    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:25.685184    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:25.712791    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.712791    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:25.716775    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:25.747803    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.747803    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:25.751621    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:25.782130    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.782130    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:25.782130    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:25.782130    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:25.833735    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:25.833735    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:25.894476    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:25.894476    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:25.925218    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:25.925218    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:26.009195    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:26.000055   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.001227   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.002238   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.003136   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.005907   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:26.009195    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:26.009195    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:28.558504    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:28.581900    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:28.615041    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.615041    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:28.619020    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:28.647386    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.647386    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:28.651512    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:28.679029    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.679029    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:28.682977    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:28.714035    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.714035    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:28.717407    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:28.746896    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.746920    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:28.749895    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:28.782541    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.782574    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:28.786249    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:28.813250    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.813250    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:28.813250    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:28.813250    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:28.891492    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:28.880764   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.881769   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.882976   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.883809   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.886227   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:28.891492    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:28.891492    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:28.934039    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:28.934039    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:28.986066    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:28.986066    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:29.044402    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:29.045400    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
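
The pgrep timestamps (:25.5, :28.5, :31.5, ...) show the whole cycle repeating on a roughly three-second cadence, which is consistent with a fixed-interval poll waiting for the apiserver to come up; the kubectl errors show why it never succeeds here: nothing is listening on localhost:8441. A sketch of such a poll loop under that assumption; this is illustrative, not minikube's actual code:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// waitForAPIServer re-checks the apiserver endpoint every ~3s until it
	// accepts a TCP connection or the overall deadline expires.
	func waitForAPIServer(addr string, deadline time.Duration) error {
		stop := time.Now().Add(deadline)
		for time.Now().Before(stop) {
			conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
			if err == nil {
				conn.Close()
				return nil
			}
			time.Sleep(3 * time.Second) // matches the cadence between cycles above
		}
		return fmt.Errorf("apiserver at %s not reachable within %s", addr, deadline)
	}

	func main() {
		if err := waitForAPIServer("localhost:8441", 30*time.Second); err != nil {
			fmt.Println(err) // same symptom as the "connection refused" lines above
		}
	}
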
	I1217 00:45:31.579014    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:31.605723    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:31.639437    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.639437    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:31.643001    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:31.672858    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.672858    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:31.676418    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:31.706815    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.706815    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:31.711450    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:31.739165    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.739165    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:31.742794    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:31.774213    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.774213    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:31.778092    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:31.808021    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.808021    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:31.811911    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:31.841111    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.841174    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:31.841207    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:31.841207    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:31.903600    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:31.903600    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:31.934979    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:31.934979    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:32.016581    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:32.006571   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.007538   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.008919   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.010207   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.011489   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:32.016581    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:32.016581    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:32.059137    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:32.059137    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:34.619048    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:34.642906    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:34.676541    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.676541    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:34.680839    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:34.710245    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.710245    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:34.715809    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:34.754209    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.754227    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:34.757792    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:34.787283    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.787283    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:34.790335    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:34.823758    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.823758    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:34.827394    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:34.856153    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.856153    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:34.859978    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:34.890024    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.890024    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:34.890024    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:34.890024    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:34.954222    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:34.954222    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:34.985196    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:34.985196    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:35.067666    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:35.054527   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.055553   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.056467   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.060229   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.061212   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:35.067666    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:35.067666    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:35.109711    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:35.109711    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
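
The "container status" gather uses a shell fallback chain: prefer crictl if installed (`which crictl || echo crictl`), and if the whole crictl invocation fails, fall back to `docker ps -a`. The same logic expressed in Go, as a sketch that assumes either CLI may be absent; the function name containerStatus is ours:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus reproduces the fallback chain from the log: try crictl
	// first; if it is missing or exits non-zero, fall back to the docker CLI.
	func containerStatus() (string, error) {
		if out, err := exec.Command("crictl", "ps", "-a").CombinedOutput(); err == nil {
			return string(out), nil
		}
		out, err := exec.Command("docker", "ps", "-a").CombinedOutput()
		return string(out), err
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("both runtimes unavailable:", err)
			return
		}
		fmt.Print(out)
	}
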
	I1217 00:45:37.664972    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:37.687969    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:37.717956    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.717956    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:37.721553    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:37.750935    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.750935    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:37.755377    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:37.786480    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.786480    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:37.790806    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:37.821246    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.821246    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:37.825408    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:37.854559    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.854559    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:37.858605    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:37.888189    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.888189    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:37.892436    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:37.923454    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.923454    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:37.923454    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:37.923454    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:37.990022    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:37.990022    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:38.021197    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:38.021197    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:38.107061    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:38.096713   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.097911   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.098862   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.100144   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.101044   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:38.107061    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:38.107061    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:38.150052    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:38.150052    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
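
Each cycle opens with `sudo pgrep -xnf kube-apiserver.*minikube.*`: match the pattern exactly (-x) against the full command line (-f) and report only the newest process (-n). pgrep exits 1 when nothing matches, which is a result rather than an error. A sketch of the same check from Go, with that exit-status convention made explicit; the wrapper is hypothetical:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	// apiServerRunning wraps pgrep the way the log does. Exit status 1 means
	// "no matching process", so it is mapped to (false, nil), not an error.
	func apiServerRunning() (bool, error) {
		err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			return true, nil
		}
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 1 {
			return false, nil
		}
		return false, err
	}

	func main() {
		up, err := apiServerRunning()
		fmt.Println("kube-apiserver running:", up, "err:", err)
	}
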
	I1217 00:45:40.710598    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:40.738050    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:40.769637    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.769637    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:40.773468    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:40.810478    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.810478    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:40.814079    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:40.848071    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.848071    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:40.851868    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:40.880725    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.880725    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:40.884928    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:40.915221    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.915221    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:40.919101    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:40.951097    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.951097    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:40.955307    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:40.990856    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.990901    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:40.990901    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:40.990901    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:41.041987    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:41.042028    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:41.104560    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:41.104560    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:41.134782    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:41.134782    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:41.221096    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:41.210697   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.211646   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.214339   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.215988   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.217121   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:41.221096    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:41.221096    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:43.768841    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:43.807393    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:43.840153    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.840153    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:43.843740    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:43.873589    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.873589    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:43.877086    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:43.906593    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.906593    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:43.910563    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:43.940004    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.940004    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:43.944461    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:43.984818    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.984818    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:43.988580    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:44.016481    7944 logs.go:282] 0 containers: []
	W1217 00:45:44.016481    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:44.020610    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:44.050198    7944 logs.go:282] 0 containers: []
	W1217 00:45:44.050225    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:44.050225    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:44.050225    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:44.096362    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:44.096362    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:44.150219    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:44.150219    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:44.209135    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:44.209135    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:44.240518    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:44.240518    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:44.328383    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:44.316790   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.317749   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.322292   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.323067   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.324563   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
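
The "describe nodes" gather runs the version-pinned kubectl from /var/lib/minikube/binaries/v1.35.0-beta.0 against the in-node kubeconfig, so it fails exactly like any other client while :8441 is down. A sketch of the same invocation (paths copied from the log; note this only makes sense inside the minikube node, where minikube issues it over SSH, not on the Windows host):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// kubectl is pinned to the cluster's Kubernetes version and pointed at
		// the node's kubeconfig, not the host's ~/.kube/config.
		out, err := exec.Command(
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"describe", "nodes",
			"--kubeconfig=/var/lib/minikube/kubeconfig",
		).CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("describe nodes failed:", err) // "connection refused" while :8441 is down
		}
	}
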
	I1217 00:45:46.833977    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:46.856919    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:46.889480    7944 logs.go:282] 0 containers: []
	W1217 00:45:46.889480    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:46.893215    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:46.924373    7944 logs.go:282] 0 containers: []
	W1217 00:45:46.924373    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:46.928774    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:46.961004    7944 logs.go:282] 0 containers: []
	W1217 00:45:46.961004    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:46.964726    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:47.003673    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.003673    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:47.006719    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:47.040232    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.040232    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:47.044112    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:47.074796    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.074796    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:47.078313    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:47.109819    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.109819    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:47.109819    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:47.109819    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:47.173702    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:47.174703    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:47.204290    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:47.204290    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:47.290268    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:47.281079   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.282388   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.283451   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.284976   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.285968   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:47.290268    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:47.290268    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:47.332308    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:47.332308    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:49.890367    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:49.913613    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:49.943685    7944 logs.go:282] 0 containers: []
	W1217 00:45:49.943685    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:49.947685    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:49.975458    7944 logs.go:282] 0 containers: []
	W1217 00:45:49.975458    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:49.979401    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:50.010709    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.010709    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:50.014179    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:50.046146    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.046146    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:50.050033    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:50.082525    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.082525    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:50.085833    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:50.113901    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.113943    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:50.117783    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:50.148202    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.148290    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:50.148290    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:50.148290    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:50.208056    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:50.208056    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:50.239113    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:50.239113    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:50.326281    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:50.316567   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.317935   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.319862   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.321021   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.322100   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:50.326281    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:50.326281    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:50.369080    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:50.369080    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:52.932111    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:52.956351    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:52.989854    7944 logs.go:282] 0 containers: []
	W1217 00:45:52.989854    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:52.995118    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:53.022557    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.022557    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:53.027906    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:53.062035    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.062035    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:53.065640    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:53.096245    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.096245    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:53.100861    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:53.131945    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.131945    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:53.135650    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:53.164825    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.164825    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:53.168602    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:53.198961    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.198961    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:53.198961    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:53.198961    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:53.260266    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:53.260266    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:53.290682    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:53.290682    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:53.375669    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:53.366817   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.367661   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.370028   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.371310   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.372461   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:53.366817   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.367661   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.370028   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.371310   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.372461   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:53.375669    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:53.375669    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:53.416110    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:53.416110    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
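Each "Gathering logs" pass collects the same four sources: kubelet journal, dmesg, the Docker/cri-docker journals, and a container listing. A minimal sketch that bundles them into one file for inspection, run inside the node (the output path is illustrative; the commands are copied from the log):

    # Bundle the diagnostics minikube gathers above into a single file.
    {
      echo '== kubelet ==';    sudo journalctl -u kubelet -n 400
      echo '== dmesg ==';      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      echo '== docker ==';     sudo journalctl -u docker -u cri-docker -n 400
      echo '== containers =='; sudo crictl ps -a || sudo docker ps -a
    } > /tmp/minikube-diag.txt 2>&1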
	I1217 00:45:55.971979    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:55.991052    7944 kubeadm.go:602] duration metric: took 4m3.9896216s to restartPrimaryControlPlane
	W1217 00:45:55.991052    7944 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 00:45:55.996485    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 00:45:56.479923    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:45:56.502762    7944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:45:56.518662    7944 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:45:56.523597    7944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:45:56.536371    7944 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:45:56.536371    7944 kubeadm.go:158] found existing configuration files:
	
	I1217 00:45:56.541198    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:45:56.554668    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:45:56.559154    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:45:56.576197    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:45:56.590283    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:45:56.594634    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:45:56.612520    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:45:56.626118    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:45:56.631259    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:45:56.648494    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:45:56.661811    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:45:56.665826    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
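The grep/rm pairs above are minikube's stale-config cleanup: each kubeconfig under /etc/kubernetes survives only if it already references the expected control-plane endpoint. The same logic as a short sketch (the endpoint and file names are taken from the log; error handling is simplified):

    # Drop kubeconfigs that do not reference the expected control-plane endpoint.
    endpoint='https://control-plane.minikube.internal:8441'
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done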
	I1217 00:45:56.684539    7944 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:45:56.809159    7944 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 00:45:56.895277    7944 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:45:56.990840    7944 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:49:57.581295    7944 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 00:49:57.581442    7944 kubeadm.go:319] 
	I1217 00:49:57.581498    7944 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 00:49:57.586513    7944 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:49:57.586513    7944 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:49:57.587141    7944 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:49:57.587141    7944 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 00:49:57.587141    7944 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 00:49:57.587141    7944 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 00:49:57.587666    7944 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_INET: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 00:49:57.588407    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 00:49:57.589479    7944 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 00:49:57.589618    7944 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 00:49:57.589771    7944 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 00:49:57.589895    7944 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 00:49:57.589957    7944 kubeadm.go:319] OS: Linux
	I1217 00:49:57.590117    7944 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:49:57.590205    7944 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:49:57.590849    7944 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:49:57.591066    7944 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:49:57.591250    7944 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:49:57.591469    7944 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:49:57.591654    7944 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:49:57.594374    7944 out.go:252]   - Generating certificates and keys ...
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:49:57.595930    7944 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:49:57.595930    7944 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:49:57.595930    7944 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:49:57.598936    7944 out.go:252]   - Booting up control plane ...
	I1217 00:49:57.598936    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:49:57.599930    7944 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001130665s
	I1217 00:49:57.599930    7944 kubeadm.go:319] 
	I1217 00:49:57.599930    7944 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 00:49:57.599930    7944 kubeadm.go:319] 	- The kubelet is not running
	I1217 00:49:57.600944    7944 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 00:49:57.600944    7944 kubeadm.go:319] 
	I1217 00:49:57.601093    7944 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 00:49:57.601093    7944 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 00:49:57.601093    7944 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 00:49:57.601093    7944 kubeadm.go:319] 
	W1217 00:49:57.601093    7944 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001130665s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
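The failure mode is consistent: the kubelet never answers its health endpoint, so kubeadm gives up after 4m0s. The checks kubeadm itself recommends can be run directly on the node; all three commands below are taken from the messages above:

    # Probe the endpoint kubeadm polls, then inspect the kubelet service.
    curl -sSL http://127.0.0.1:10248/healthz    # a healthy kubelet answers "ok"
    systemctl status kubelet                    # unit state and last exit code
    journalctl -xeu kubelet | tail -n 50        # recent kubelet errors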
	
	I1217 00:49:57.606482    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 00:49:58.061133    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:49:58.080059    7944 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:49:58.085171    7944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:49:58.098234    7944 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:49:58.098234    7944 kubeadm.go:158] found existing configuration files:
	
	I1217 00:49:58.102655    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:49:58.116544    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:49:58.121754    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:49:58.141782    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:49:58.155836    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:49:58.159790    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:49:58.177864    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:49:58.192169    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:49:58.196436    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:49:58.213653    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:49:58.227417    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:49:58.231893    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:49:58.251588    7944 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:49:58.366677    7944 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 00:49:58.451159    7944 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:49:58.548545    7944 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:53:59.244804    7944 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 00:53:59.244874    7944 kubeadm.go:319] 
	I1217 00:53:59.245013    7944 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 00:53:59.252131    7944 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:53:59.252131    7944 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:53:59.252131    7944 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:53:59.252131    7944 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 00:53:59.253316    7944 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 00:53:59.253422    7944 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_INET: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 00:53:59.255258    7944 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 00:53:59.255381    7944 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 00:53:59.255513    7944 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 00:53:59.255633    7944 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 00:53:59.255694    7944 kubeadm.go:319] OS: Linux
	I1217 00:53:59.255790    7944 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:53:59.255877    7944 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:53:59.255998    7944 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:53:59.256094    7944 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:53:59.256215    7944 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:53:59.256364    7944 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:53:59.256426    7944 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:53:59.256548    7944 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:53:59.256670    7944 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:53:59.256888    7944 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:53:59.257050    7944 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:53:59.257070    7944 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:53:59.257070    7944 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:53:59.272325    7944 out.go:252]   - Generating certificates and keys ...
	I1217 00:53:59.272325    7944 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:53:59.273020    7944 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:53:59.273020    7944 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:53:59.273020    7944 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:53:59.273353    7944 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:53:59.273480    7944 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:53:59.273606    7944 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:53:59.273733    7944 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:53:59.273865    7944 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:53:59.274056    7944 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:53:59.274056    7944 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:53:59.274182    7944 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:53:59.274309    7944 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:53:59.274434    7944 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:53:59.274560    7944 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:53:59.274685    7944 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:53:59.274812    7944 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:53:59.274938    7944 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:53:59.275063    7944 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:53:59.277866    7944 out.go:252]   - Booting up control plane ...
	I1217 00:53:59.277866    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:53:59.278506    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:53:59.278506    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:53:59.278506    7944 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:53:59.279865    7944 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:53:59.280054    7944 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:53:59.280189    7944 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000873338s
	I1217 00:53:59.280189    7944 kubeadm.go:319] 
	I1217 00:53:59.280189    7944 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 00:53:59.280189    7944 kubeadm.go:319] 	- The kubelet is not running
	I1217 00:53:59.280189    7944 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 00:53:59.280189    7944 kubeadm.go:319] 
	I1217 00:53:59.280189    7944 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 00:53:59.280712    7944 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 00:53:59.280785    7944 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 00:53:59.280785    7944 kubeadm.go:319] 
	I1217 00:53:59.280785    7944 kubeadm.go:403] duration metric: took 12m7.3287248s to StartCluster
	I1217 00:53:59.280785    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:59.285017    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:59.529112    7944 cri.go:89] found id: ""
	I1217 00:53:59.529112    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.529112    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:59.529112    7944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:53:59.533754    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:59.574863    7944 cri.go:89] found id: ""
	I1217 00:53:59.574863    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.574863    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:53:59.574863    7944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:53:59.579181    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:59.620688    7944 cri.go:89] found id: ""
	I1217 00:53:59.620688    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.620688    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:53:59.620688    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:59.627987    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:59.676059    7944 cri.go:89] found id: ""
	I1217 00:53:59.676114    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.676114    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:59.676114    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:59.680719    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:59.723707    7944 cri.go:89] found id: ""
	I1217 00:53:59.723707    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.723707    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:59.723707    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:59.729555    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:59.774476    7944 cri.go:89] found id: ""
	I1217 00:53:59.774476    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.774560    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:59.774560    7944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:59.780477    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:59.820909    7944 cri.go:89] found id: ""
	I1217 00:53:59.820909    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.820909    7944 logs.go:284] No container was found matching "kindnet"
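After the start attempt fails, minikube repeats the component scan through the CRI endpoint rather than the Docker CLI. The equivalent by hand (illustrative loop; the crictl flags are as used in the log):

    # Same per-component scan as above, via crictl instead of docker.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      echo "${c}: $(sudo crictl ps -a --quiet --name "$c")"
    done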
	I1217 00:53:59.820909    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:59.820909    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:59.893583    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:59.893583    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:59.926154    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:59.926154    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:00.179462    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:00.169127   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.170223   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.171927   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.173016   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.174482   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:00.169127   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.170223   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.171927   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.173016   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.174482   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:00.179462    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:54:00.179462    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:54:00.221875    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:54:00.221875    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 00:54:00.281055    7944 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000873338s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 00:54:00.281122    7944 out.go:285] * 
	W1217 00:54:00.281210    7944 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000873338s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
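Both init attempts carry the same SystemVerification warning: cgroup v1 support is deprecated, and the quoted remedy is setting the kubelet option 'FailCgroupV1' to 'false'. A sketch of one way to apply that, assuming the KubeletConfiguration field is spelled failCgroupV1 as the warning implies; the config path comes from the [kubelet-start] lines above, and this is an illustration, not a fix verified in this run:

    # Append the option named in the warning to the kubelet config, then restart.
    # Assumes the key is not already present in /var/lib/kubelet/config.yaml.
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet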
	
	W1217 00:54:00.281448    7944 out.go:285] * 
	W1217 00:54:00.283315    7944 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
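Given the 5.15.153.1-microsoft-standard-WSL2 kernel and the repeated cgroup v1 warnings in every attempt, a commonly documented host-side workaround is to boot WSL2 with cgroup v2 only. This is hypothetical for this run and was not exercised here:

    # Hypothetical workaround (untested in this run): boot WSL2 with cgroup v2 only.
    # On the Windows host, add to %UserProfile%\.wslconfig:
    #
    #   [wsl2]
    #   kernelCommandLine = cgroup_no_v1=all
    #
    # then restart WSL so it takes effect:
    #   wsl --shutdown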
	I1217 00:54:00.296133    7944 out.go:203] 
	W1217 00:54:00.298699    7944 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000873338s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 00:54:00.299289    7944 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 00:54:00.299350    7944 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 00:54:00.301526    7944 out.go:203] 
	
	
	==> Docker <==
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799347277Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799352978Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799377780Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799412283Z" level=info msg="Initializing buildkit"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.911073637Z" level=info msg="Completed buildkit initialization"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918044834Z" level=info msg="Daemon has completed initialization"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918252552Z" level=info msg="API listen on [::]:2376"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918284354Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 00:41:48 functional-409700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918293455Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 00:41:48 functional-409700 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:41:48 functional-409700 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 17 00:41:48 functional-409700 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 17 00:41:49 functional-409700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Loaded network plugin cni"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 00:41:49 functional-409700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:02.292686   40955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:02.293987   40955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:02.295015   40955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:02.296059   40955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:02.297463   40955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001333] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001212] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001083] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000810] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000879] FS:  0000000000000000 GS:  0000000000000000
	[Dec17 00:41] CPU: 8 PID: 65919 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000795] RIP: 0033:0x7fc513f26b20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7fc513f26af6.
	[  +0.000661] RSP: 002b:00007ffce9a430e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000957] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000792] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000787] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001172] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001280] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001257] FS:  0000000000000000 GS:  0000000000000000
	[  +0.952455] CPU: 6 PID: 66046 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000828] RIP: 0033:0x7f7de767eb20
	[  +0.000402] Code: Unable to access opcode bytes at RIP 0x7f7de767eaf6.
	[  +0.000691] RSP: 002b:00007ffdccfc39b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000866] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000810] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001071] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001218] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001105] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001100] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 00:54:02 up  1:13,  0 user,  load average: 0.29, 0.36, 0.45
	Linux functional-409700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:53:58 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:53:59 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 17 00:53:59 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:53:59 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:53:59 functional-409700 kubelet[40719]: E1217 00:53:59.688707   40719 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:53:59 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:53:59 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:54:00 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 00:54:00 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:54:00 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:54:00 functional-409700 kubelet[40807]: E1217 00:54:00.451192   40807 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:54:00 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:54:00 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:54:01 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 17 00:54:01 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:54:01 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:54:01 functional-409700 kubelet[40835]: E1217 00:54:01.182747   40835 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:54:01 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:54:01 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:54:01 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 17 00:54:01 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:54:01 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:54:01 functional-409700 kubelet[40860]: E1217 00:54:01.935356   40860 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:54:01 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:54:01 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700: exit status 2 (579.7101ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-409700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ExtraConfig (741.29s)
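The kubelet journal above shows the actual blocker: kubelet v1.35.0-beta.0 refuses to validate its configuration on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), and the node kernel is 5.15.153.1-microsoft-standard-WSL2 with docker info reporting CgroupDriver:cgroupfs, so the Docker Desktop backend is still on cgroup v1. A minimal remediation sketch, assuming the host really is Docker Desktop on WSL2 and that this Jenkins user's .wslconfig is writable; none of this was run by the test itself:

    # On the Windows host: boot WSL2 with cgroup v2 only, then restart WSL and
    # Docker Desktop ([wsl2] kernelCommandLine is a documented .wslconfig key;
    # the user path is taken from this report's logs).
    printf '[wsl2]\nkernelCommandLine = cgroup_no_v1=all\n' >> /mnt/c/Users/jenkins.minikube4/.wslconfig
    wsl.exe --shutdown

    # Inside the minikube node: confirm cgroup v2 before retrying.
    minikube ssh -p functional-409700 -- stat -fc %T /sys/fs/cgroup   # expect "cgroup2fs"

    # Then retry with the suggestion minikube itself printed above.
    out/minikube-windows-amd64.exe start -p functional-409700 --extra-config=kubelet.cgroup-driver=systemd

Alternatively, per the kubeadm SystemVerification warning quoted above, kubelet could be kept on cgroup v1 by setting the kubelet configuration option 'FailCgroupV1' to 'false', though the warning itself marks that path as deprecated.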

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (54.41s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-409700 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: (dbg) Non-zero exit: kubectl --context functional-409700 get po -l tier=control-plane -n kube-system -o=json: exit status 1 (50.3619712s)

                                                
                                                
** stderr ** 
	E1217 00:54:14.110273    4832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:54:24.194806    4832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:54:34.239493    4832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:54:44.283778    4832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:54:54.324987    4832 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

                                                
                                                
** /stderr **
functional_test.go:827: failed to get components. args "kubectl --context functional-409700 get po -l tier=control-plane -n kube-system -o=json": exit status 1
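The EOFs above are the client side of the same kubelet failure: nothing is answering behind the published apiserver port. 127.0.0.1:56622 is the host end of the container's 8441/tcp mapping (see the docker inspect dump below), so a quick triage sketch, assuming the container is still running, would be:

    docker port functional-409700 8441/tcp                        # expect 127.0.0.1:56622
    curl -k https://127.0.0.1:56622/version                       # refused/EOF while the apiserver is down
    out/minikube-windows-amd64.exe status -p functional-409700    # host Running, apiserver Stopped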
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-409700
helpers_test.go:244: (dbg) docker inspect functional-409700:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de",
	        "Created": "2025-12-17T00:24:05.223199249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:24:05.522288836Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hosts",
	        "LogPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de-json.log",
	        "Name": "/functional-409700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-409700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-409700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-409700",
	                "Source": "/var/lib/docker/volumes/functional-409700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-409700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-409700",
	                "name.minikube.sigs.k8s.io": "functional-409700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e875b43ca920e8e90c82b8f1c4d2b0999a57d980ebe17c6406f45a4ccb58168",
	            "SandboxKey": "/var/run/docker/netns/6e875b43ca92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56623"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56619"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56620"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56621"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56622"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-409700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ee1b2722ed4e503e063723d4c0c00abc99d4e57387b6e181156511528a5a0896",
	                    "EndpointID": "42fbe7a4b084643a92cc2b6c93734665bcde06afb5eef9fe47b1c8f2757b2d71",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-409700",
	                        "ee5097ea8c4b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
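The "8441/tcp" entry under NetworkSettings.Ports confirms the mapping kubectl was retrying. The same Go template minikube uses for the SSH port in the "Last Start" log below extracts it directly; a small sketch:

    docker inspect functional-409700 -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'
    # -> 56622, i.e. the https://127.0.0.1:56622 endpoint in the ComponentHealth stderr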
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700: exit status 2 (664.656ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 logs -n 25: (1.7080205s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                          ARGS                                                           │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh            │ functional-045600 ssh pgrep buildkitd                                                                                   │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │                     │
	│ image          │ functional-045600 image build -t localhost/my-image:functional-045600 testdata\build --alsologtostderr                  │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ image          │ functional-045600 image ls                                                                                              │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                 │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                 │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ update-context │ functional-045600 update-context --alsologtostderr -v=2                                                                 │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:19 UTC │ 17 Dec 25 00:19 UTC │
	│ delete         │ -p functional-045600                                                                                                    │ functional-045600 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:23 UTC │ 17 Dec 25 00:23 UTC │
	│ start          │ -p functional-409700 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker --kubernetes-version=v1.35.0-beta.0 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:23 UTC │                     │
	│ start          │ -p functional-409700 --alsologtostderr -v=8                                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:32 UTC │                     │
	│ cache          │ functional-409700 cache add registry.k8s.io/pause:3.1                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ functional-409700 cache add registry.k8s.io/pause:3.3                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ functional-409700 cache add registry.k8s.io/pause:latest                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ functional-409700 cache add minikube-local-cache-test:functional-409700                                                 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ functional-409700 cache delete minikube-local-cache-test:functional-409700                                              │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.3                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ list                                                                                                                    │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh            │ functional-409700 ssh sudo crictl images                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh            │ functional-409700 ssh sudo docker rmi registry.k8s.io/pause:latest                                                      │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh            │ functional-409700 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │                     │
	│ cache          │ functional-409700 cache reload                                                                                          │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ ssh            │ functional-409700 ssh sudo crictl inspecti registry.k8s.io/pause:latest                                                 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ delete registry.k8s.io/pause:3.1                                                                                        │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ cache          │ delete registry.k8s.io/pause:latest                                                                                     │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │ 17 Dec 25 00:39 UTC │
	│ kubectl        │ functional-409700 kubectl -- --context functional-409700 get pods                                                       │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:39 UTC │                     │
	│ start          │ -p functional-409700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:41 UTC │                     │
	└────────────────┴─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:41:42
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:41:42.742737    7944 out.go:360] Setting OutFile to fd 1692 ...
	I1217 00:41:42.785452    7944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:41:42.785452    7944 out.go:374] Setting ErrFile to fd 2032...
	I1217 00:41:42.785452    7944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:41:42.823093    7944 out.go:368] Setting JSON to false
	I1217 00:41:42.826928    7944 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3691,"bootTime":1765928411,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:41:42.827062    7944 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:41:42.832423    7944 out.go:179] * [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:41:42.834008    7944 notify.go:221] Checking for updates...
	I1217 00:41:42.836028    7944 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:41:42.837747    7944 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:41:42.839400    7944 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:41:42.841743    7944 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:41:42.843853    7944 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:41:42.846824    7944 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:41:42.847138    7944 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:41:43.032802    7944 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:41:43.036200    7944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:41:43.287623    7944 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-17 00:41:43.26443223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:41:43.290624    7944 out.go:179] * Using the docker driver based on existing profile
	I1217 00:41:43.295624    7944 start.go:309] selected driver: docker
	I1217 00:41:43.295624    7944 start.go:927] validating driver "docker" against &{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:41:43.295624    7944 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:41:43.302622    7944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:41:43.528811    7944 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-17 00:41:43.511883839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:41:43.567003    7944 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:41:43.567003    7944 cni.go:84] Creating CNI manager for ""
	I1217 00:41:43.567003    7944 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:41:43.567003    7944 start.go:353] cluster config:
	{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:41:43.571110    7944 out.go:179] * Starting "functional-409700" primary control-plane node in "functional-409700" cluster
	I1217 00:41:43.575004    7944 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 00:41:43.577924    7944 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:41:43.581930    7944 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:41:43.581930    7944 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:41:43.581930    7944 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 00:41:43.581930    7944 cache.go:65] Caching tarball of preloaded images
	I1217 00:41:43.582517    7944 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 00:41:43.582517    7944 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 00:41:43.582517    7944 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\config.json ...
	I1217 00:41:43.660928    7944 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:41:43.660928    7944 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:41:43.660928    7944 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:41:43.660928    7944 start.go:360] acquireMachinesLock for functional-409700: {Name:mk3729943c20c012b6c7db136193ce43a4a81cc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:41:43.660928    7944 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-409700"
	I1217 00:41:43.660928    7944 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:41:43.660928    7944 fix.go:54] fixHost starting: 
	I1217 00:41:43.667914    7944 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:41:43.723914    7944 fix.go:112] recreateIfNeeded on functional-409700: state=Running err=<nil>
	W1217 00:41:43.723914    7944 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:41:43.726919    7944 out.go:252] * Updating the running docker "functional-409700" container ...
	I1217 00:41:43.726919    7944 machine.go:94] provisionDockerMachine start ...
	I1217 00:41:43.731914    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:43.796916    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:43.796916    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:43.796916    7944 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:41:43.969131    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:41:43.969131    7944 ubuntu.go:182] provisioning hostname "functional-409700"
	I1217 00:41:43.975058    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.033428    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:44.033980    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:44.033980    7944 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-409700 && echo "functional-409700" | sudo tee /etc/hostname
	I1217 00:41:44.218389    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:41:44.221624    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.281826    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:44.282333    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:44.282333    7944 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-409700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-409700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-409700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:41:44.449024    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
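This /etc/hosts patch is the stock minikube pattern for making the node's own hostname resolvable: grep -x anchors the match to a whole line, and 127.0.1.1 is the Debian convention for the machine's hostname entry. The same check stands alone as (GNU grep, since \s is a GNU extension):

	grep -xq '.*\sfunctional-409700' /etc/hosts && echo "hosts entry present" || echo "hosts entry missing"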
	I1217 00:41:44.449024    7944 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 00:41:44.449024    7944 ubuntu.go:190] setting up certificates
	I1217 00:41:44.449024    7944 provision.go:84] configureAuth start
	I1217 00:41:44.452071    7944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:41:44.516121    7944 provision.go:143] copyHostCerts
	I1217 00:41:44.516430    7944 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 00:41:44.516430    7944 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 00:41:44.516430    7944 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 00:41:44.517399    7944 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 00:41:44.517399    7944 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 00:41:44.517399    7944 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 00:41:44.518364    7944 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 00:41:44.518364    7944 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 00:41:44.518364    7944 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 00:41:44.519103    7944 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-409700 san=[127.0.0.1 192.168.49.2 functional-409700 localhost minikube]
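minikube generates this server certificate in Go, but an openssl equivalent is a useful mental model for what the san=[...] list becomes; the file names below are hypothetical stand-ins for the paths in the log:

	# sign a server cert against the local CA with the SANs listed above (sketch)
	openssl req -new -key server-key.pem -subj "/O=jenkins.functional-409700" \
	  | openssl x509 -req -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 \
	      -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.49.2,DNS:functional-409700,DNS:localhost,DNS:minikube') \
	      -out server.pem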
	I1217 00:41:44.613354    7944 provision.go:177] copyRemoteCerts
	I1217 00:41:44.617354    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:41:44.620354    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.676405    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:44.805633    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:41:44.840310    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:41:44.872497    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:41:44.899304    7944 provision.go:87] duration metric: took 450.2424ms to configureAuth
	I1217 00:41:44.899304    7944 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:41:44.899304    7944 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:41:44.902693    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.962192    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:44.962661    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:44.962688    7944 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 00:41:45.129265    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 00:41:45.129265    7944 ubuntu.go:71] root file system type: overlay
	I1217 00:41:45.129265    7944 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 00:41:45.133980    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.191141    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:45.191583    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:45.191676    7944 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 00:41:45.381081    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 00:41:45.384910    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.439634    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:45.439634    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:45.439634    7944 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 00:41:45.639837    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
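Two standard idioms are at work in the unit update above: an empty ExecStart= clears the command inherited from the base unit before the replacement is set (exactly as the in-file comment explains), and the diff || { mv; daemon-reload; restart; } one-liner makes the update idempotent, so Docker is only restarted when the rendered unit actually changed. A minimal drop-in using the same ExecStart reset (hypothetical override path and flags):

	# /etc/systemd/system/docker.service.d/override.conf
	[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock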
	I1217 00:41:45.639837    7944 machine.go:97] duration metric: took 1.9128981s to provisionDockerMachine
	I1217 00:41:45.639837    7944 start.go:293] postStartSetup for "functional-409700" (driver="docker")
	I1217 00:41:45.639837    7944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:41:45.643968    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:41:45.647579    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.702256    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:45.830302    7944 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:41:45.840912    7944 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:41:45.840912    7944 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:41:45.840912    7944 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 00:41:45.840912    7944 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 00:41:45.841469    7944 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 00:41:45.842433    7944 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts -> hosts in /etc/test/nested/copy/4168
	I1217 00:41:45.846605    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4168
	I1217 00:41:45.861850    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 00:41:45.894051    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts --> /etc/test/nested/copy/4168/hosts (40 bytes)
	I1217 00:41:45.924540    7944 start.go:296] duration metric: took 284.7004ms for postStartSetup
	I1217 00:41:45.929030    7944 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:41:45.931390    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.988238    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:46.118181    7944 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:41:46.128256    7944 fix.go:56] duration metric: took 2.4673029s for fixHost
	I1217 00:41:46.128336    7944 start.go:83] releasing machines lock for "functional-409700", held for 2.4673029s
	I1217 00:41:46.132380    7944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:41:46.192243    7944 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 00:41:46.196238    7944 ssh_runner.go:195] Run: cat /version.json
	I1217 00:41:46.196238    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:46.199443    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:46.250894    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:46.252723    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:46.374927    7944 ssh_runner.go:195] Run: systemctl --version
	W1217 00:41:46.375040    7944 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 00:41:46.393243    7944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:41:46.405015    7944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:41:46.411122    7944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:41:46.427748    7944 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:41:46.427748    7944 start.go:496] detecting cgroup driver to use...
	I1217 00:41:46.427748    7944 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:41:46.428359    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:41:46.459279    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:41:46.481169    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:41:46.495981    7944 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:41:46.501301    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 00:41:46.522269    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:41:46.543007    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:41:46.564748    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W1217 00:41:46.571173    7944 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 00:41:46.571173    7944 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 00:41:46.587140    7944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:41:46.608125    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:41:46.628561    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:41:46.651071    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 00:41:46.670567    7944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:41:46.691876    7944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:41:46.708884    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:46.907593    7944 ssh_runner.go:195] Run: sudo systemctl restart containerd
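The sed calls above patch /etc/containerd/config.toml in place; they presuppose a CRI plugin section roughly like the following (a sketch for containerd 1.x; the exact layout varies by containerd version):

	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	  restrict_oom_score_adj = false
	  [plugins."io.containerd.grpc.v1.cri".cni]
	    conf_dir = "/etc/cni/net.d"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false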
	I1217 00:41:47.157536    7944 start.go:496] detecting cgroup driver to use...
	I1217 00:41:47.157588    7944 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:41:47.161701    7944 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 00:41:47.187508    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:41:47.211591    7944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:41:47.291331    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:41:47.315837    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:41:47.336371    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:41:47.365154    7944 ssh_runner.go:195] Run: which cri-dockerd
	I1217 00:41:47.376814    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 00:41:47.391947    7944 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 00:41:47.416863    7944 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 00:41:47.573803    7944 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 00:41:47.742508    7944 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 00:41:47.742508    7944 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
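The 130-byte daemon.json itself is not echoed in the log; given that the "cgroupfs" driver is being configured, it plausibly looks like the following (an assumption, not the verbatim file):

	{
	  "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  "log-driver": "json-file",
	  "log-opts": { "max-size": "100m" },
	  "storage-driver": "overlay2"
	}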
	I1217 00:41:47.769569    7944 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 00:41:47.792419    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:47.926195    7944 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 00:41:48.924753    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:41:48.948387    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 00:41:48.972423    7944 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 00:41:49.001034    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:41:49.024808    7944 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 00:41:49.170637    7944 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 00:41:49.341524    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:49.489502    7944 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 00:41:49.515161    7944 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 00:41:49.538565    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:49.678445    7944 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 00:41:49.792662    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:41:49.810919    7944 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 00:41:49.817201    7944 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 00:41:49.824745    7944 start.go:564] Will wait 60s for crictl version
	I1217 00:41:49.829680    7944 ssh_runner.go:195] Run: which crictl
	I1217 00:41:49.841215    7944 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:41:49.886490    7944 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 00:41:49.890545    7944 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:41:49.932656    7944 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:41:49.973421    7944 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 00:41:49.976704    7944 cli_runner.go:164] Run: docker exec -t functional-409700 dig +short host.docker.internal
	I1217 00:41:50.163467    7944 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 00:41:50.168979    7944 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 00:41:50.182632    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:50.243980    7944 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 00:41:50.246233    7944 kubeadm.go:884] updating cluster {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:41:50.246321    7944 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:41:50.249328    7944 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:41:50.284688    7944 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-409700
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1217 00:41:50.284688    7944 docker.go:621] Images already preloaded, skipping extraction
	I1217 00:41:50.288341    7944 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:41:50.318208    7944 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-409700
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1217 00:41:50.318208    7944 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:41:50.318208    7944 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1217 00:41:50.318208    7944 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-409700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:41:50.322786    7944 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 00:41:50.580992    7944 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 00:41:50.580992    7944 cni.go:84] Creating CNI manager for ""
	I1217 00:41:50.580992    7944 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:41:50.580992    7944 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:41:50.580992    7944 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-409700 NodeName:functional-409700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:41:50.581552    7944 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-409700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
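
To see which of these values deviate from stock kubeadm, the built-in defaults can be printed with kubeadm itself:

	# print kubeadm's InitConfiguration/ClusterConfiguration defaults
	kubeadm config print init-defaults
	# include the kubelet component defaults as well
	kubeadm config print init-defaults --component-configs KubeletConfiguration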
	
	I1217 00:41:50.586113    7944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:41:50.602747    7944 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:41:50.606600    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:41:50.618442    7944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 00:41:50.639202    7944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:41:50.660303    7944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I1217 00:41:50.686181    7944 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:41:50.699393    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:50.841016    7944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:41:50.909095    7944 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700 for IP: 192.168.49.2
	I1217 00:41:50.909095    7944 certs.go:195] generating shared ca certs ...
	I1217 00:41:50.909181    7944 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:41:50.909751    7944 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 00:41:50.909751    7944 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 00:41:50.909751    7944 certs.go:257] generating profile certs ...
	I1217 00:41:50.911054    7944 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\client.key
	I1217 00:41:50.911486    7944 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key.dc66fb1b
	I1217 00:41:50.911858    7944 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key
	I1217 00:41:50.913273    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 00:41:50.913634    7944 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 00:41:50.913687    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 00:41:50.913976    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 00:41:50.914271    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 00:41:50.914593    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 00:41:50.915068    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 00:41:50.916395    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:41:50.945779    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:41:50.974173    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:41:51.006494    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:41:51.039634    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:41:51.069500    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:41:51.095965    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:41:51.124108    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:41:51.153111    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 00:41:51.181612    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:41:51.209244    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 00:41:51.236994    7944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:41:51.261730    7944 ssh_runner.go:195] Run: openssl version
	I1217 00:41:51.280852    7944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.301978    7944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 00:41:51.322912    7944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.331873    7944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.336845    7944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.388885    7944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:41:51.407531    7944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.426119    7944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:41:51.446689    7944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.455113    7944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.459541    7944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.507465    7944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:41:51.525452    7944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.543170    7944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 00:41:51.560439    7944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.566853    7944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.571342    7944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.621647    7944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
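The openssl x509 -hash / ln -fs pairs above implement OpenSSL's hashed-directory convention: a CA in /etc/ssl/certs is located via a symlink named <subject-hash>.0 (here 3ec20f2e.0, b5213941.0, 51391683.0). As a standalone step:

	# install a CA into OpenSSL's hashed cert directory
	cert=/usr/share/ca-certificates/41682.pem
	hash=$(openssl x509 -hash -noout -in "$cert")
	sudo ln -fs "$cert" "/etc/ssl/certs/${hash}.0"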
	I1217 00:41:51.639899    7944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:41:51.651440    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:41:51.702199    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:41:51.752106    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:41:51.800819    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:41:51.851441    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:41:51.900439    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
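Each -checkend 86400 run exits non-zero if the certificate expires within 24 hours; minikube presumably uses that exit status to decide whether a cert needs regenerating. The contract in shell terms:

	openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 \
	  && echo "valid for at least another 24h" \
	  || echo "expires within 24h"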
	I1217 00:41:51.944312    7944 kubeadm.go:401] StartCluster: {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:41:51.948688    7944 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 00:41:51.985002    7944 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:41:51.998839    7944 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:41:51.998925    7944 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:41:52.003287    7944 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:41:52.016206    7944 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.019955    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:52.077101    7944 kubeconfig.go:125] found "functional-409700" server: "https://127.0.0.1:56622"
	I1217 00:41:52.084213    7944 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:41:52.100216    7944 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 00:24:17.645837868 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 00:41:50.679316242 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
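The drift check is simply diff's exit status on the previously applied kubeadm.yaml versus the freshly rendered one; the same logic in shell:

	if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null 2>&1; then
	  echo "no drift: keep existing control-plane configuration"
	else
	  echo "drift detected: reconfigure from kubeadm.yaml.new"
	fi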
	I1217 00:41:52.100258    7944 kubeadm.go:1161] stopping kube-system containers ...
	I1217 00:41:52.104145    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 00:41:52.137767    7944 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 00:41:52.163943    7944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:41:52.178186    7944 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 17 00:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 17 00:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 17 00:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 17 00:28 /etc/kubernetes/scheduler.conf
	
	I1217 00:41:52.182824    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:41:52.204493    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:41:52.219638    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.223951    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:41:52.243159    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:41:52.260005    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.264353    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:41:52.281662    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:41:52.297828    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.301928    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:41:52.320845    7944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:41:52.344713    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:52.568408    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:53.273580    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:53.519011    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:53.597190    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:53.657031    7944 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:41:53.662643    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:54.162433    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:54.661965    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:55.162165    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:55.662293    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:56.162422    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:56.662001    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:57.162515    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:57.662491    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... same pgrep probe repeated every ~0.5s from 00:41:58 through 00:42:52; 110 near-identical lines omitted ...]
	I1217 00:42:53.163097    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
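The run above is minikube's apiserver readiness wait: the same `sudo pgrep -xnf kube-apiserver.*minikube.*` probe fires roughly every 500ms, never finds the process, and the wait eventually gives up and falls into the diagnostic pass below. A minimal sketch of that kind of poll loop, with a hypothetical `runSSH` helper standing in for minikube's real ssh_runner:

package main

import (
	"context"
	"errors"
	"fmt"
	"time"
)

// runSSH is a hypothetical stand-in for minikube's ssh_runner; assume it
// returns a non-nil error when the remote command exits non-zero.
func runSSH(ctx context.Context, cmd string) error {
	return errors.New("not implemented: run cmd over SSH") // stub
}

// waitForAPIServer re-runs the pgrep probe every ~500ms until it succeeds
// or the context deadline expires, matching the cadence in the log.
func waitForAPIServer(ctx context.Context) error {
	probe := `sudo pgrep -xnf kube-apiserver.*minikube.*`
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		if err := runSSH(ctx, probe); err == nil {
			return nil // apiserver process found
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
		case <-ticker.C: // retry after ~500ms
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()
	fmt.Println(waitForAPIServer(ctx))
}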
	I1217 00:42:53.661774    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:42:53.693561    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.693561    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:42:53.697663    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:42:53.729976    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.729976    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:42:53.733954    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:42:53.762808    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.762808    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:42:53.767775    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:42:53.797017    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.797017    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:42:53.800693    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:42:53.829028    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.829028    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:42:53.832681    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:42:53.860730    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.860730    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:42:53.864375    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:42:53.893858    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.893858    7944 logs.go:284] No container was found matching "kindnet"
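Each diagnostic pass begins like the block above: minikube asks the Docker daemon for every expected control-plane container by name filter (`docker ps -a --filter=name=k8s_<component> --format={{.ID}}`) and finds none. A minimal sketch of that probe, assuming the docker CLI is on PATH:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers whose names match k8s_<component>,
// mirroring the docker ps probes in the log above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("probe failed for %q: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%s: %v\n", c, ids)
	}
}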
	I1217 00:42:53.893858    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:42:53.893858    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:42:53.958662    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:42:53.958662    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:42:53.990110    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:42:53.990110    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:42:54.075886    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:42:54.062994   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.064181   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.068054   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.070063   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.071483   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:42:54.062994   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.064181   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.068054   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.070063   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.071483   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
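The failure above is the same story seen from kubectl's side: the kubeconfig points it at https://localhost:8441, no apiserver is listening, so every API discovery request is refused (kubectl retries the group list five times before giving up). A quick connectivity check that reproduces the symptom, assuming it runs on the node itself:

package main

import (
	"fmt"
	"net"
	"time"
)

// Dial the apiserver port the kubeconfig points at; with no apiserver
// container running, this fails with "connection refused" just like the
// kubectl errors in the log.
func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("apiserver port is listening")
}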
	I1217 00:42:54.075886    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:42:54.075886    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:42:54.124100    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:42:54.124100    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
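After the probes, each pass gathers the same five log sources (kubelet and Docker units via journalctl, the kernel ring buffer via dmesg, node state via kubectl describe nodes, and container status via crictl with a docker fallback); only their order varies between passes. A sketch of that gathering step, again with a hypothetical `runSSHOutput` in place of the real SSH runner:

package main

import (
	"errors"
	"fmt"
)

// runSSHOutput is a hypothetical stand-in for minikube's SSH runner; the
// stub never succeeds, which matches this failing run.
func runSSHOutput(cmd string) (string, error) {
	return "", errors.New("not implemented")
}

// gatherLogs runs the fixed set of diagnostic commands seen in the log
// and keeps whatever output it manages to collect.
func gatherLogs() map[string]string {
	cmds := []struct{ name, cmd string }{
		{"kubelet", `sudo journalctl -u kubelet -n 400`},
		{"dmesg", `sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400`},
		{"describe nodes", `sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig`},
		{"Docker", `sudo journalctl -u docker -u cri-docker -n 400`},
		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
	}
	out := make(map[string]string)
	for _, c := range cmds {
		fmt.Printf("Gathering logs for %s ...\n", c.name)
		if o, err := runSSHOutput(c.cmd); err == nil {
			out[c.name] = o
		}
	}
	return out
}

func main() {
	gatherLogs()
}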
	I1217 00:42:56.693664    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:56.717550    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:42:56.749444    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.749476    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:42:56.753285    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:42:56.784073    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.784073    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:42:56.788320    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:42:56.817232    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.817232    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:42:56.821873    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:42:56.853120    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.853120    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:42:56.857160    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:42:56.887514    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.887514    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:42:56.891198    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:42:56.922568    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.922636    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:42:56.925831    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:42:56.954531    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.954531    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:42:56.954531    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:42:56.954531    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:42:57.019098    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:42:57.019098    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:42:57.050929    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:42:57.050955    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:42:57.138578    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:42:57.130682   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.131621   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.132913   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.134193   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.135394   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:42:57.130682   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.131621   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.132913   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.134193   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.135394   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:42:57.138578    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:42:57.138578    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:42:57.182851    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:42:57.182851    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:42:59.736560    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:59.756547    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:42:59.785666    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.785666    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:42:59.789191    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:42:59.818090    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.818151    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:42:59.821701    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:42:59.849198    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.849198    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:42:59.852824    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:42:59.880565    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.880565    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:42:59.884161    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:42:59.915009    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.915009    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:42:59.918550    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:42:59.949230    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.949230    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:42:59.953371    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:42:59.979962    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.979962    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:42:59.979962    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:42:59.979962    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:00.044543    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:00.044543    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:00.075045    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:00.075045    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:00.184096    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:00.172623   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.173411   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.176396   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.177559   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.178839   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:00.172623   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.173411   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.176396   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.177559   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.178839   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:00.184096    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:00.184096    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:00.229125    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:00.229125    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:02.788235    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:02.812066    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:02.844035    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.844035    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:02.847391    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:02.879346    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.879346    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:02.883507    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:02.911508    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.911573    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:02.915132    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:02.944186    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.944186    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:02.948177    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:02.977489    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.977489    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:02.980961    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:03.009657    7944 logs.go:282] 0 containers: []
	W1217 00:43:03.009657    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:03.013587    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:03.042816    7944 logs.go:282] 0 containers: []
	W1217 00:43:03.042816    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:03.042816    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:03.042816    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:03.126456    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:03.115768   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.116665   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.118976   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.119737   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.121834   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:03.115768   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.116665   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.118976   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.119737   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.121834   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:03.126456    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:03.126456    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:03.167566    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:03.167566    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:03.219094    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:03.219094    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:03.285299    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:03.285299    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:05.820619    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:05.845854    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:05.875867    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.875867    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:05.879229    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:05.909558    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.909558    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:05.912556    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:05.942200    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.942273    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:05.945627    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:05.975289    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.975289    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:05.979052    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:06.009570    7944 logs.go:282] 0 containers: []
	W1217 00:43:06.009570    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:06.013210    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:06.042977    7944 logs.go:282] 0 containers: []
	W1217 00:43:06.042977    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:06.046640    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:06.075849    7944 logs.go:282] 0 containers: []
	W1217 00:43:06.075849    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:06.075849    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:06.075849    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:06.120266    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:06.120266    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:06.168821    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:06.168821    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:06.230879    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:06.230879    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:06.260885    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:06.260885    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:06.340031    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:06.330529   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.331395   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.334293   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.335557   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.336695   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:06.330529   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.331395   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.334293   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.335557   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.336695   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:08.845285    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:08.868682    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:08.897291    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.897291    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:08.900871    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:08.928001    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.928001    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:08.931488    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:08.961792    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.961792    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:08.965426    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:08.994180    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.994253    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:08.997983    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:09.026539    7944 logs.go:282] 0 containers: []
	W1217 00:43:09.026539    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:09.030228    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:09.061065    7944 logs.go:282] 0 containers: []
	W1217 00:43:09.061094    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:09.064483    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:09.093815    7944 logs.go:282] 0 containers: []
	W1217 00:43:09.093815    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:09.093815    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:09.093815    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:09.173989    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:09.162229   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.164006   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.164905   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.168015   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.169720   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:09.162229   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.164006   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.164905   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.168015   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.169720   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:09.174037    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:09.174037    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:09.214846    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:09.214846    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:09.269685    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:09.269685    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:09.331802    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:09.331802    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:11.869149    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:11.892656    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:11.921635    7944 logs.go:282] 0 containers: []
	W1217 00:43:11.921635    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:11.926449    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:11.957938    7944 logs.go:282] 0 containers: []
	W1217 00:43:11.957938    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:11.961505    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:11.991894    7944 logs.go:282] 0 containers: []
	W1217 00:43:11.991894    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:11.995992    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:12.025039    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.025039    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:12.029016    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:12.060459    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.060459    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:12.064652    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:12.096164    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.096164    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:12.100038    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:12.129762    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.129824    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:12.129824    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:12.129824    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:12.194950    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:12.194950    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:12.227435    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:12.227435    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:12.311750    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:12.301902   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.303071   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.304222   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.305986   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.307529   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:12.301902   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.303071   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.304222   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.305986   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.307529   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:12.311750    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:12.311750    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:12.352387    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:12.352387    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:14.907650    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:14.933011    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:14.961340    7944 logs.go:282] 0 containers: []
	W1217 00:43:14.961340    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:14.964869    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:14.991179    7944 logs.go:282] 0 containers: []
	W1217 00:43:14.991179    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:14.996502    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:15.025325    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.025325    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:15.031024    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:15.058452    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.058452    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:15.062691    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:15.091232    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.091232    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:15.096528    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:15.127551    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.127551    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:15.131605    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:15.161113    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.161113    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:15.161113    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:15.161113    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:15.189644    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:15.189644    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:15.270306    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:15.259821   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.260629   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.263303   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.264244   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.266788   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:15.259821   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.260629   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.263303   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.264244   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.266788   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:15.270306    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:15.270306    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:15.311714    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:15.311714    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:15.371391    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:15.371391    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:17.939209    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:17.962095    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:17.990273    7944 logs.go:282] 0 containers: []
	W1217 00:43:17.990273    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:17.993918    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:18.025229    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.025229    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:18.029538    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:18.060092    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.060092    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:18.064444    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:18.095199    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.095230    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:18.098808    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:18.129658    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.129658    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:18.133236    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:18.163628    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.163628    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:18.167493    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:18.199253    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.199253    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:18.199253    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:18.199253    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:18.252203    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:18.252203    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:18.316097    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:18.316097    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:18.347393    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:18.347393    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:18.426495    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:18.416595   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.417796   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.419140   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.420105   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.421235   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:18.426495    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:18.426495    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
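
The cycle above then repeats on a roughly three-second cadence: probe for a kube-apiserver process, list the expected control-plane containers (all empty), re-gather diagnostics. A minimal Go sketch of such a wait loop follows, assuming a bare TCP reachability check on the apiserver port from the log (8441) stands in for minikube's actual health checks; waitForAPIServer is a hypothetical helper, not minikube API.

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    // waitForAPIServer polls a TCP address until it accepts a connection
    // or the deadline passes. A bare dial stands in for minikube's richer
    // apiserver health checks; 8441 is the port shown in the log.
    func waitForAPIServer(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil // something is listening on the apiserver port
    		}
    		time.Sleep(3 * time.Second) // ~3s cadence, as in the log timestamps
    	}
    	return fmt.Errorf("apiserver at %s not reachable within %s", addr, timeout)
    }

    func main() {
    	if err := waitForAPIServer("localhost:8441", 4*time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }
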
	I1217 00:43:20.972950    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:20.998624    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:21.025837    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.025837    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:21.029315    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:21.061085    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.061085    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:21.065387    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:21.092871    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.092871    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:21.096706    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:21.126179    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.126179    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:21.129834    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:21.159720    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.159720    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:21.163263    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:21.193011    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.193011    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:21.196667    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:21.229222    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.229222    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:21.229222    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:21.229222    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:21.279391    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:21.279391    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:21.341649    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:21.341649    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:21.372055    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:21.372055    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:21.451011    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:21.440556   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.441861   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.442811   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.446984   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.448016   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:21.451011    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:21.451011    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:24.011538    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:24.037171    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:24.067520    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.067544    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:24.070755    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:24.101421    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.101454    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:24.104927    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:24.133336    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.133336    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:24.137178    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:24.164662    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.164662    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:24.168324    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:24.200218    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.200218    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:24.203764    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:24.234603    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.234603    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:24.238011    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:24.267400    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.267400    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:24.267400    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:24.267400    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:24.348263    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:24.338918   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.339739   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.341999   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.343378   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.344717   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:24.348263    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:24.348263    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:24.393298    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:24.393298    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:24.446709    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:24.446709    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:24.518891    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:24.518891    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:27.054877    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:27.078747    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:27.111142    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.111142    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:27.114844    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:27.143801    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.143801    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:27.147663    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:27.176215    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.176215    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:27.179758    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:27.208587    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.208587    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:27.211873    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:27.241061    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.241061    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:27.244905    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:27.276011    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.276065    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:27.279281    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:27.309068    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.309068    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:27.309068    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:27.309068    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:27.372079    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:27.372079    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:27.403215    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:27.403215    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:27.502209    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:27.492924   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.494023   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.494999   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.496603   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.497726   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:27.502209    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:27.502209    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:27.543251    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:27.543251    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
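
Each "Gathering logs for ..." line maps to one shell command executed on the node. Below is a rough, self-contained sketch of that collection step, using plain os/exec in place of minikube's SSH runner; the command strings are copied from the log and collectDiagnostics is a hypothetical helper.

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // collectDiagnostics runs the same shell commands the "Gathering logs"
    // lines show and returns each command's combined output keyed by name.
    // exec.Command with /bin/bash -c mirrors the invocations in the log.
    func collectDiagnostics() map[string]string {
    	cmds := map[string]string{
    		"kubelet":          "sudo journalctl -u kubelet -n 400",
    		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
    		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
    		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
    	}
    	out := make(map[string]string)
    	for name, cmd := range cmds {
    		b, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
    		if err != nil {
    			out[name] = fmt.Sprintf("error: %v\n%s", err, b)
    			continue
    		}
    		out[name] = string(b)
    	}
    	return out
    }

    func main() {
    	for name, logs := range collectDiagnostics() {
    		fmt.Printf("=== %s ===\n%s\n", name, logs)
    	}
    }
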
	I1217 00:43:30.103213    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:30.126929    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:30.158148    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.158148    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:30.162286    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:30.191927    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.191927    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:30.195748    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:30.225040    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.225040    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:30.229444    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:30.260498    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.260498    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:30.264750    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:30.293312    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.293312    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:30.296869    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:30.325167    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.325167    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:30.328938    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:30.363267    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.363267    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:30.363267    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:30.363267    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:30.393795    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:30.393795    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:30.487446    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:30.464124   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.465346   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.468428   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.469684   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.481402   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:30.487446    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:30.487446    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:30.530226    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:30.530226    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:30.585635    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:30.585635    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:33.151438    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:33.175766    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:33.207203    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.207203    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:33.210965    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:33.237795    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.237795    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:33.242087    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:33.273041    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.273041    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:33.277103    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:33.305283    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.305283    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:33.309730    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:33.337737    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.337737    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:33.341408    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:33.370694    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.370694    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:33.374111    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:33.407836    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.407836    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:33.407836    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:33.407836    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:33.434955    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:33.434955    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:33.529365    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:33.517320   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.518450   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.519517   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.520800   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.522107   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:33.529365    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:33.529365    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:33.572145    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:33.572145    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:33.624502    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:33.624502    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:36.189426    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:36.213378    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:36.243407    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.243407    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:36.246746    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:36.274995    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.274995    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:36.278271    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:36.305533    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.305533    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:36.309459    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:36.338892    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.338892    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:36.342669    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:36.373516    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.373516    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:36.377003    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:36.404831    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.404831    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:36.408515    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:36.437790    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.437790    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:36.437790    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:36.437790    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:36.540076    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:36.526050   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.528341   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.531176   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.532283   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.533415   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:36.540076    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:36.540076    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:36.580664    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:36.580664    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:36.635234    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:36.635234    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:36.695702    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:36.695702    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
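
The seven "0 containers" probes in every cycle are docker ps queries filtered on the k8s_<component> name prefix. The sketch below reproduces the same check against a local docker CLI and reports which components have no container at all; missingComponents is a hypothetical helper, and the component list is taken from the log.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // missingComponents runs `docker ps -a --filter name=k8s_<component>
    // --format {{.ID}}` per control-plane component, as in the log, and
    // returns the components with no matching container.
    func missingComponents() []string {
    	components := []string{
    		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    		"kube-proxy", "kube-controller-manager", "kindnet",
    	}
    	var missing []string
    	for _, c := range components {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    		if err != nil || strings.TrimSpace(string(out)) == "" {
    			missing = append(missing, c) // no container IDs returned
    		}
    	}
    	return missing
    }

    func main() {
    	fmt.Println("components without containers:", missingComponents())
    }
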
	I1217 00:43:39.230926    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:39.255012    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:39.288661    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.288661    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:39.293143    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:39.320903    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.320967    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:39.324725    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:39.350161    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.350161    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:39.353696    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:39.380073    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.380073    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:39.383515    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:39.411510    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.411510    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:39.415491    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:39.449683    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.449683    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:39.453620    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:39.487800    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.487800    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:39.487800    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:39.487800    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:39.552943    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:39.552943    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:39.582035    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:39.583033    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:39.660499    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:39.647312   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.648102   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.652665   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.654408   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.654966   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:39.660499    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:39.660499    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:39.705645    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:39.705645    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:42.267731    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:42.297885    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:42.329299    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.329326    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:42.332959    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:42.361173    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.361173    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:42.365107    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:42.393236    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.393236    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:42.397363    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:42.430949    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.430949    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:42.435377    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:42.465696    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.465696    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:42.468849    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:42.512182    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.512182    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:42.515699    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:42.545680    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.545680    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:42.545680    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:42.545680    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:42.607372    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:42.607372    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:42.637761    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:42.637761    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:42.720140    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:42.709136   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.709905   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.711877   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.712984   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.713829   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:42.720140    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:42.720140    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:42.760712    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:42.760712    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:45.318861    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:45.345331    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:45.376136    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.376136    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:45.379539    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:45.408720    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.408720    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:45.412623    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:45.444664    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.444664    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:45.448226    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:45.484195    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.484195    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:45.488022    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:45.515242    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.515242    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:45.519184    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:45.551260    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.551260    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:45.554894    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:45.581795    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.581795    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:45.581795    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:45.581795    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:45.625880    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:45.625880    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:45.678280    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:45.678280    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:45.738938    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:45.738938    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:45.770054    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:45.770054    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:45.854057    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:45.839960   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.842045   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.843544   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.846571   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.847420   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:48.359806    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:48.384092    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:48.415158    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.415192    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:48.418996    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:48.446149    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.446149    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:48.449676    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:48.487416    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.487416    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:48.491652    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:48.520073    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.520073    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:48.524101    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:48.550421    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.550421    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:48.554497    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:48.583643    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.583666    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:48.587154    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:48.616812    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.616812    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:48.616812    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:48.616812    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:48.681323    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:48.681323    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:48.712866    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:48.712866    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:48.798447    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:48.788338   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.789333   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.790575   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.791655   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.792589   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:48.798447    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:48.798447    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:48.839546    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:48.839546    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
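	(Editorial note: the cycle above is minikube's control-plane probe: one pgrep for a kube-apiserver process, then one "docker ps -a --filter=name=k8s_<component> --format={{.ID}}" per component; every probe returns zero IDs, so only host-level logs can be gathered. Below is a minimal Go sketch of the same docker-ps probe; the helper name is hypothetical and this is an illustration, not minikube's actual logs.go code.)

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainerIDs mirrors the probes in the log above: list all containers
	// (running or not) whose name carries the kubeadm "k8s_<component>" prefix.
	// Hypothetical helper, for illustration only.
	func listContainerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
			ids, err := listContainerIDs(c)
			if err != nil {
				fmt.Printf("%s: probe failed: %v\n", c, err)
				continue
			}
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}

	(An empty result for every component, as in this log, means kubelet never created the static pods, which is why the kubectl calls that follow can only fail with connection refused.)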
	I1217 00:43:51.393802    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:51.419527    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:51.453783    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.453783    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:51.457619    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:51.496053    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.496053    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:51.499949    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:51.528492    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.528492    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:51.531946    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:51.560363    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.560363    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:51.563875    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:51.597143    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.597143    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:51.600764    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:51.630459    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.630459    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:51.634473    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:51.667072    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.667072    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:51.667072    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:51.667072    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:51.719154    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:51.719154    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:51.779761    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:51.779761    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:51.810036    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:51.810036    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:51.887952    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:51.877388   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.878091   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.881129   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.882321   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.883227   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:51.887952    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:51.887952    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
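	(Editorial note: the "container status" step in these cycles uses a shell fallback: prefer crictl when it is installed, otherwise fall back to plain "docker ps -a"; that is what the "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" line encodes. The same fallback sketched in Go, under the assumption that either binary may be missing; illustrative only.)

	package main

	import (
		"fmt"
		"os/exec"
	)

	// containerStatus prefers crictl and falls back to docker, matching the
	// "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" runs above.
	func containerStatus() ([]byte, error) {
		if _, err := exec.LookPath("crictl"); err == nil {
			if out, err := exec.Command("crictl", "ps", "-a").Output(); err == nil {
				return out, nil
			}
		}
		return exec.Command("docker", "ps", "-a").Output()
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("neither crictl nor docker produced a listing:", err)
			return
		}
		fmt.Print(string(out))
	}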
	I1217 00:43:54.434243    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:54.457541    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:54.486698    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.486698    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:54.491137    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:54.520500    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.520500    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:54.524176    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:54.552487    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.552487    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:54.556310    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:54.585424    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.585424    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:54.588683    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:54.619901    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.619970    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:54.623608    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:54.655623    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.655706    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:54.658833    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:54.690413    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.690413    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:54.690413    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:54.690492    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:54.771466    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:54.760114   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.761075   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.762159   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.763541   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.764770   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:54.771466    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:54.771466    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:54.813307    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:54.813307    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:54.874633    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:54.875154    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:54.937630    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:54.937630    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:57.472782    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:57.497186    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:57.526677    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.526745    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:57.530218    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:57.557916    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.557948    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:57.562041    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:57.590924    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.590924    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:57.594569    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:57.621738    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.621738    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:57.627319    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:57.656111    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.656111    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:57.659689    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:57.690217    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.690217    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:57.693915    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:57.723629    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.723629    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:57.723629    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:57.723688    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:57.788129    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:57.788129    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:57.818809    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:57.818809    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:57.903055    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:57.891485   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.892810   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.893729   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.896044   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.896988   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:57.903055    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:57.903055    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:57.944153    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:57.944153    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:00.501950    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:00.530348    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:00.561749    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.562270    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:00.566179    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:00.596812    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.596812    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:00.600551    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:00.628898    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.628898    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:00.632187    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:00.661210    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.661255    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:00.664477    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:00.692625    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.692625    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:00.696565    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:00.727420    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.727420    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:00.731176    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:00.761041    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.761041    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:00.761041    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:00.761041    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:00.813195    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:00.813286    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:00.875819    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:00.875819    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:00.906004    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:00.906004    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:00.995354    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:00.985498   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.986676   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.987771   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.989033   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.990260   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:00.995354    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:00.995354    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:03.542659    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:03.566401    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:03.597875    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.597875    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:03.602087    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:03.631114    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.631114    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:03.635275    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:03.664437    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.665863    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:03.669211    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:03.697100    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.697100    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:03.701535    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:03.731200    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.731200    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:03.735391    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:03.764893    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.764893    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:03.768303    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:03.799245    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.799245    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:03.799245    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:03.799245    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:03.863068    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:03.863068    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:03.892825    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:03.892825    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:03.975253    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:03.964400   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.965730   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.967384   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.969805   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.970929   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:03.975253    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:03.975253    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:04.016164    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:04.016164    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:06.571695    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:06.597029    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:06.627889    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.627889    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:06.631611    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:06.661118    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.661118    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:06.664736    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:06.694336    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.694336    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:06.698523    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:06.728693    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.728693    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:06.732767    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:06.762060    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.762130    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:06.765313    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:06.795222    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.795222    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:06.799233    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:06.829491    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.829525    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:06.829525    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:06.829558    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:06.858476    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:06.858476    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:06.938014    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:06.927171   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.928103   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.929321   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.932292   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.933974   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:06.938014    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:06.938014    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:06.978960    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:06.978960    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:07.027942    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:07.027942    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:09.595591    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:09.619202    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:09.648727    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.648727    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:09.653265    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:09.684682    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.684682    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:09.688140    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:09.715249    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.715249    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:09.718566    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:09.749969    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.749969    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:09.753003    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:09.779832    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.779832    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:09.783608    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:09.812286    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.812326    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:09.816849    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:09.845801    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.845801    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:09.845801    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:09.845801    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:09.890276    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:09.891278    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:09.945030    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:09.945030    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:10.007215    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:10.007215    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:10.037318    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:10.037318    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:10.122162    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:10.111724   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.112922   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.114124   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.115187   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.116442   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:12.627660    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:12.651516    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:12.684952    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.684952    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:12.688749    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:12.717327    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.717327    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:12.721146    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:12.749548    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.749548    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:12.752616    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:12.784015    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.784015    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:12.787596    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:12.817388    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.817388    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:12.821554    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:12.849737    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.849737    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:12.853589    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:12.882735    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.882735    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:12.882735    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:12.882735    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:12.966389    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:12.956160   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.957149   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.957910   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.960356   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.961793   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:12.966389    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:12.966389    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:13.009759    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:13.009759    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:13.057767    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:13.057767    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:13.121685    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:13.121685    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
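	(Editorial note: every kubectl invocation in this log dies the same way: discovery of the API group list (memcache.go:265) gets connection refused on localhost:8441, meaning nothing is listening on the apiserver port at all. A quick Go probe like the sketch below, with the port taken from the failing kubeconfig above, separates "port closed" from TLS or RBAC problems; hypothetical, for illustration only.)

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// 8441 is the apiserver port used by the failing kubectl calls above.
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			// "connection refused" here matches the memcache.go errors:
			// the kube-apiserver container was never started.
			fmt.Println("apiserver port closed:", err)
			return
		}
		conn.Close()
		fmt.Println("something is listening on :8441; the failure is higher up the stack")
	}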
	I1217 00:44:15.659014    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:15.683463    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:15.714834    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.714857    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:15.718351    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:15.749782    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.749812    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:15.753368    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:15.782321    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.782321    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:15.785961    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:15.816416    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.816416    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:15.822152    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:15.848733    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.848791    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:15.852246    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:15.881272    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.881310    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:15.886378    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:15.917818    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.917818    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:15.917892    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:15.917892    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:15.983033    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:15.983033    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:16.015133    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:16.015133    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:16.105395    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:16.093215   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.094155   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.098670   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.100261   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.100776   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:16.105395    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:16.105438    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:16.146209    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:16.146209    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:18.701433    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:18.725475    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:18.759149    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.759149    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:18.762892    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:18.795437    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.795437    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:18.799127    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:18.835050    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.835580    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:18.839967    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:18.867222    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.867222    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:18.870583    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:18.899263    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.899263    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:18.902802    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:18.934115    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.934115    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:18.937420    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:18.969205    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.969205    7944 logs.go:284] No container was found matching "kindnet"
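The seven probes above are one docker command per control-plane component, filtering on the k8s_<component> container-name convention and counting the returned IDs (the logs.go:282/284 pairs); every probe here comes back empty because no Kubernetes containers ever started. A compact sketch of that probe loop, assuming a locally reachable docker daemon (structure and names are illustrative, not the logs.go source):

    // probe.go: check which control-plane containers exist, k8s_<name> convention.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("%s: docker ps failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
        }
    }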
	I1217 00:44:18.969205    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:18.969205    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:19.030841    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:19.030841    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:19.061419    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:19.061938    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:19.143852    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:19.132860   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.133712   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.136777   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.137881   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.138767   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:19.132860   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.133712   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.136777   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.137881   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.138767   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:19.143852    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:19.143852    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:19.187635    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:19.187709    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
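Everything from 00:44:16 to the end of this excerpt is one gather-and-retry iteration on a roughly three-second period: look for a kube-apiserver process, probe the seven container names, re-gather the kubelet/dmesg/Docker/container-status logs, retry "describe nodes", sleep, repeat. Schematically (a hand-written reduction of the pattern visible in this log, with an assumed overall deadline; not minikube's actual wait code):

    // waitloop.go: schematic of the retry pattern visible in the log.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func apiserverUp() bool {
        conn, err := net.DialTimeout("tcp", "localhost:8441", time.Second)
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }

    func main() {
        deadline := time.Now().Add(6 * time.Minute) // assumed overall start timeout
        for time.Now().Before(deadline) {
            if apiserverUp() {
                fmt.Println("apiserver is answering; stop polling")
                return
            }
            // In the real log each iteration also re-gathers kubelet, dmesg and
            // docker logs and retries "kubectl describe nodes" before sleeping.
            time.Sleep(3 * time.Second)
        }
        fmt.Println("gave up waiting for the apiserver")
    }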
[ The polling cycle shown above repeats nine more times between 00:44:21 and 00:44:46: each iteration again finds 0 containers for kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager and kindnet, re-gathers the kubelet, dmesg, describe nodes, Docker and container status logs, and fails "describe nodes" with the identical connection refused errors against localhost:8441. Only the timestamps and the kubectl PIDs change (28203, 28352, 28498, 28652, 28820, 28970, 29120, 29288, 29425). ]
	I1217 00:44:49.050761    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:49.075428    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:49.105673    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.105673    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:49.109924    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:49.140245    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.140245    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:49.143980    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:49.175115    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.175115    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:49.181267    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:49.213667    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.213667    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:49.217486    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:49.249277    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.249277    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:49.252880    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:49.279244    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.279287    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:49.282893    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:49.313826    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.313826    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:49.313826    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:49.313826    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:49.395270    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:49.385168   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.385960   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.388757   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.390178   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.391697   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:49.395270    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:49.395270    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:49.439990    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:49.439990    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:49.493048    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:49.493048    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:49.555675    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:49.555675    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
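	Each polling pass runs the same per-component container scan: one docker ps -a per expected control-plane piece, filtered on the k8s_<name> prefix that cri-dockerd gives Kubernetes-managed containers. Condensed into a loop (a sketch; minikube issues these as individual ssh_runner commands rather than a loop):

	    # Condensed form of the seven-container scan repeated in every pass.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet; do
	        ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	        [ -z "$ids" ] && echo "No container was found matching \"${c}\""
	    done

	An empty result for every component, as here, means no control-plane container was ever created or kept alive.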
	I1217 00:44:52.091191    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:52.121154    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:52.152807    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.152807    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:52.157047    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:52.185793    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.185793    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:52.188792    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:52.217804    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.218793    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:52.221792    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:52.253749    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.253749    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:52.257528    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:52.286783    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.286783    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:52.290341    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:52.319799    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.319799    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:52.323376    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:52.351656    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.351656    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:52.351656    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:52.351656    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:52.395381    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:52.395381    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:52.449049    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:52.449049    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:52.511942    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:52.511942    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:52.541707    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:52.541707    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:52.622537    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:52.614766   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.615704   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.616948   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.617983   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.618983   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:55.130052    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:55.154497    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:55.185053    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.185086    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:55.188968    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:55.215935    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.215935    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:55.220385    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:55.249124    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.249159    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:55.253058    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:55.282148    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.282230    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:55.285701    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:55.315081    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.315081    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:55.320240    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:55.350419    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.350449    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:55.353993    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:55.386346    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.386346    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:55.386346    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:55.386346    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:55.463518    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:55.456649   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.457723   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.458695   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.460286   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.461389   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:55.463518    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:55.463518    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:55.502884    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:55.502884    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:55.567300    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:55.567300    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:55.630547    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:55.630547    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
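	The dmesg step in each "Gathering logs" pass keeps only kernel messages of warning severity and above and caps the capture at 400 lines. The short flags in the one-liner above, spelled out in their util-linux long forms:

	    # -P = --nopager (no pager), -H = --human (readable timestamps),
	    # -L=never = --color=never (no ANSI codes in captured text).
	    sudo dmesg --nopager --human --color=never \
	        --level warn,err,crit,alert,emerg | tail -n 400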
	I1217 00:44:58.165717    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:58.189522    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:58.223415    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.223415    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:58.227138    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:58.256133    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.256133    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:58.259919    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:58.289751    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.289751    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:58.293341    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:58.323835    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.323835    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:58.327981    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:58.358897    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.358897    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:58.362525    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:58.393696    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.393696    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:58.397786    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:58.426810    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.426810    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:58.426810    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:58.426810    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:58.492668    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:58.492668    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:58.523854    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:58.523854    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:58.609164    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:58.598901   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.599812   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.602076   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.604272   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.606217   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:58.609164    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:58.609164    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:58.654356    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:58.654356    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
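	The "container status" command builds in a double fallback: the backticks substitute the path to crictl if one is on PATH (otherwise the bare word crictl, which cannot execute), and || then falls through to plain docker ps -a. Written out step by step (a behavior-equivalent sketch of the one-liner in the log):

	    # Equivalent of: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
	    crictl_bin=$(which crictl || echo crictl)      # bare name if crictl is absent
	    sudo "$crictl_bin" ps -a || sudo docker ps -a  # fall back to the docker CLI

	On a node without crictl, the second branch is what actually produces the listing.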
	I1217 00:45:01.211859    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:01.236949    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:01.268645    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.268645    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:01.273856    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:01.305336    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.305336    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:01.309133    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:01.339056    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.339056    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:01.343432    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:01.373802    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.373802    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:01.378587    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:01.408624    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.408624    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:01.414210    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:01.446499    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.446499    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:01.450189    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:01.479782    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.479782    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:01.479782    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:01.479829    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:01.526819    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:01.526819    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:01.591797    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:01.591797    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:01.624206    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:01.624206    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:01.713187    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:01.701188   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.703402   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.704627   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.705600   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.706926   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:01.713187    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:01.713187    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:04.261443    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:04.286201    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:04.315610    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.315610    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:04.319607    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:04.348007    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.348007    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:04.351825    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:04.378854    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.378854    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:04.382430    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:04.414385    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.414385    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:04.419751    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:04.447734    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.447734    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:04.452650    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:04.483414    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.483414    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:04.488519    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:04.520173    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.520173    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:04.520173    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:04.520173    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:04.583573    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:04.583573    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:04.615102    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:04.615102    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:04.703186    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:04.693374   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.694566   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.695324   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.698221   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.699360   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:04.703186    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:04.703186    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:04.745696    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:04.745696    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:07.302305    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:07.327138    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:07.357072    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.357072    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:07.361245    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:07.393135    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.393135    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:07.397020    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:07.426598    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.426623    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:07.430259    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:07.459216    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.459216    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:07.463233    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:07.491206    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.491206    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:07.496432    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:07.527082    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.527082    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:07.530080    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:07.563609    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.563609    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:07.563609    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:07.563609    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:07.624175    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:07.624175    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:07.654046    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:07.655373    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:07.733760    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:07.724686   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.725828   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.726798   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.727878   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.729852   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:07.733760    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:07.733760    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:07.775826    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:07.775826    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:10.333009    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:10.359433    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:10.394281    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.394281    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:10.399772    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:10.431921    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.431921    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:10.435941    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:10.466929    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.466929    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:10.469952    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:10.500979    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.500979    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:10.504132    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:10.532972    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.532972    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:10.536526    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:10.565609    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.565609    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:10.569307    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:10.597263    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.597263    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:10.597263    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:10.597263    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:10.625496    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:10.625496    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:10.716452    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:10.706137   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.707571   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.709046   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.710674   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.711932   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:10.716452    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:10.716535    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:10.757898    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:10.757898    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:10.807685    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:10.807685    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:13.376757    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:13.401022    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:13.433179    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.433179    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:13.438943    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:13.466315    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.466315    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:13.469406    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:13.498170    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.498170    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:13.503463    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:13.531045    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.531045    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:13.534623    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:13.563549    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.563572    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:13.567173    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:13.595412    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.595412    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:13.599138    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:13.627347    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.627347    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:13.627347    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:13.627347    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:13.687440    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:13.688440    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:13.718641    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:13.718785    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:13.801949    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:13.792952   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.794106   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.795272   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.796913   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.798020   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:13.801949    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:13.801949    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:13.846773    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:13.847288    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:16.401019    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:16.426837    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:16.461985    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.461985    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:16.465693    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:16.494330    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.494354    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:16.497490    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:16.527742    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.527742    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:16.531287    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:16.561095    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.561095    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:16.564902    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:16.594173    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.594173    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:16.597642    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:16.627598    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.627598    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:16.630884    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:16.659950    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.660031    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:16.660031    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:16.660031    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:16.740660    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:16.730888   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.732344   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.734426   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.736250   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.737220   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:16.740692    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:16.740692    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:16.782319    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:16.782319    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:16.835245    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:16.835245    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:16.900147    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:16.900147    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
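
From here the collector settles into a fixed cycle, repeating roughly every three seconds for the remainder of the test: probe for a running apiserver with pgrep, list containers for each control-plane component by its k8s_<name> prefix, find none, and fall back to gathering kubelet, dmesg, describe-nodes, Docker, and container-status output. Below is a minimal Go sketch of that polling pattern, for orientation only: it shells out to a local docker CLI instead of minikube's SSH runner, and the cadence and deadline are assumed values, not minikube's.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // The seven component names the collector filters on in the log above.
    var components = []string{
    	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
    	"kube-proxy", "kube-controller-manager", "kindnet",
    }

    // containerIDs mirrors: docker ps -a --filter=name=k8s_<c> --format={{.ID}}
    func containerIDs(c string) []string {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil
    	}
    	return strings.Fields(string(out))
    }

    func main() {
    	deadline := time.Now().Add(2 * time.Minute) // assumed; the real wait runs much longer
    	for time.Now().Before(deadline) {
    		missing := 0
    		for _, c := range components {
    			ids := containerIDs(c)
    			fmt.Printf("%d containers: %v\n", len(ids), ids)
    			if len(ids) == 0 {
    				fmt.Printf("No container was found matching %q\n", c)
    				missing++
    			}
    		}
    		if missing == 0 {
    			return // control plane is up
    		}
    		time.Sleep(3 * time.Second) // matches the ~3s cadence visible in the log
    	}
    	fmt.Println("gave up waiting for the control plane")
    }
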
	I1217 00:45:19.437638    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:19.462468    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:19.493244    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.493244    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:19.497367    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:19.526430    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.526430    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:19.530589    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:19.559166    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.559222    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:19.562429    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:19.594311    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.594311    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:19.597936    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:19.627339    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.627339    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:19.632033    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:19.659648    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.659648    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:19.663351    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:19.696628    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.696628    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:19.696628    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:19.696628    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:19.749701    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:19.749701    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:19.809018    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:19.809018    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:19.838771    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:19.838771    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:19.921290    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:19.910944   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.912216   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.913176   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.916258   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.918467   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:19.921290    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:19.921290    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:22.468833    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:22.494625    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:22.526034    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.526034    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:22.529623    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:22.565289    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.565289    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:22.569286    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:22.597280    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.597280    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:22.601010    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:22.630330    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.630330    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:22.634511    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:22.663939    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.663939    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:22.667575    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:22.696762    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.696792    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:22.700137    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:22.732285    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.732285    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:22.732285    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:22.732285    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:22.814702    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:22.805990   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.808311   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.809673   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.810947   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.811986   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:22.814702    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:22.814702    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:22.864515    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:22.864515    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:22.917896    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:22.917896    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:22.984213    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:22.984213    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:25.517090    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:25.542531    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:25.575294    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.575294    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:25.579526    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:25.610041    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.610041    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:25.614160    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:25.643682    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.643712    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:25.647264    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:25.679557    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.679557    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:25.685184    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:25.712791    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.712791    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:25.716775    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:25.747803    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.747803    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:25.751621    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:25.782130    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.782130    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:25.782130    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:25.782130    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:25.833735    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:25.833735    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:25.894476    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:25.894476    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:25.925218    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:25.925218    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:26.009195    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:26.000055   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.001227   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.002238   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.003136   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.005907   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:26.009195    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:26.009195    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:28.558504    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:28.581900    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:28.615041    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.615041    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:28.619020    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:28.647386    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.647386    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:28.651512    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:28.679029    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.679029    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:28.682977    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:28.714035    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.714035    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:28.717407    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:28.746896    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.746920    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:28.749895    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:28.782541    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.782574    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:28.786249    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:28.813250    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.813250    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:28.813250    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:28.813250    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:28.891492    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:28.880764   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.881769   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.882976   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.883809   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.886227   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:28.891492    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:28.891492    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:28.934039    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:28.934039    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:28.986066    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:28.986066    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:29.044402    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:29.045400    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:31.579014    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:31.605723    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:31.639437    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.639437    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:31.643001    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:31.672858    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.672858    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:31.676418    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:31.706815    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.706815    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:31.711450    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:31.739165    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.739165    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:31.742794    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:31.774213    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.774213    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:31.778092    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:31.808021    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.808021    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:31.811911    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:31.841111    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.841174    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:31.841207    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:31.841207    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:31.903600    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:31.903600    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:31.934979    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:31.934979    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:32.016581    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:32.006571   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.007538   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.008919   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.010207   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.011489   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:32.016581    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:32.016581    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:32.059137    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:32.059137    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
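
Every describe-nodes attempt in this loop dies the same way: the kubeconfig targets localhost:8441, the dialer on this host tries the IPv6 loopback first, and nothing is listening, so kubectl logs five connection-refused errors and gives up. An illustrative standalone probe (not part of the test suite) that confirms the port is closed on both loopback addresses:

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// The log only ever shows the [::1] attempt because kubectl's
    	// dialer reaches the IPv6 loopback first on this host.
    	for _, addr := range []string{"127.0.0.1:8441", "[::1]:8441"} {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err != nil {
    			fmt.Printf("%-16s closed: %v\n", addr, err)
    			continue
    		}
    		conn.Close()
    		fmt.Printf("%-16s open\n", addr)
    	}
    }
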
	I1217 00:45:34.619048    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:34.642906    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:34.676541    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.676541    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:34.680839    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:34.710245    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.710245    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:34.715809    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:34.754209    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.754227    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:34.757792    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:34.787283    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.787283    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:34.790335    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:34.823758    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.823758    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:34.827394    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:34.856153    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.856153    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:34.859978    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:34.890024    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.890024    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:34.890024    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:34.890024    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:34.954222    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:34.954222    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:34.985196    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:34.985196    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:35.067666    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:35.054527   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.055553   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.056467   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.060229   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.061212   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:35.067666    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:35.067666    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:35.109711    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:35.109711    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:37.664972    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:37.687969    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:37.717956    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.717956    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:37.721553    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:37.750935    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.750935    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:37.755377    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:37.786480    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.786480    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:37.790806    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:37.821246    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.821246    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:37.825408    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:37.854559    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.854559    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:37.858605    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:37.888189    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.888189    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:37.892436    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:37.923454    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.923454    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:37.923454    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:37.923454    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:37.990022    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:37.990022    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:38.021197    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:38.021197    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:38.107061    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:38.096713   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.097911   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.098862   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.100144   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.101044   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:38.107061    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:38.107061    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:38.150052    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:38.150052    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:40.710598    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:40.738050    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:40.769637    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.769637    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:40.773468    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:40.810478    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.810478    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:40.814079    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:40.848071    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.848071    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:40.851868    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:40.880725    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.880725    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:40.884928    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:40.915221    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.915221    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:40.919101    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:40.951097    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.951097    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:40.955307    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:40.990856    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.990901    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:40.990901    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:40.990901    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:41.041987    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:41.042028    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:41.104560    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:41.104560    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:41.134782    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:41.134782    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:41.221096    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:41.210697   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.211646   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.214339   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.215988   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.217121   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:41.221096    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:41.221096    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:43.768841    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:43.807393    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:43.840153    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.840153    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:43.843740    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:43.873589    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.873589    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:43.877086    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:43.906593    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.906593    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:43.910563    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:43.940004    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.940004    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:43.944461    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:43.984818    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.984818    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:43.988580    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:44.016481    7944 logs.go:282] 0 containers: []
	W1217 00:45:44.016481    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:44.020610    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:44.050198    7944 logs.go:282] 0 containers: []
	W1217 00:45:44.050225    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:44.050225    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:44.050225    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:44.096362    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:44.096362    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:44.150219    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:44.150219    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:44.209135    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:44.209135    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:44.240518    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:44.240518    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:44.328383    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:44.316790   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.317749   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.322292   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.323067   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.324563   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:46.833977    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:46.856919    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:46.889480    7944 logs.go:282] 0 containers: []
	W1217 00:45:46.889480    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:46.893215    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:46.924373    7944 logs.go:282] 0 containers: []
	W1217 00:45:46.924373    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:46.928774    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:46.961004    7944 logs.go:282] 0 containers: []
	W1217 00:45:46.961004    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:46.964726    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:47.003673    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.003673    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:47.006719    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:47.040232    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.040232    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:47.044112    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:47.074796    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.074796    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:47.078313    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:47.109819    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.109819    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:47.109819    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:47.109819    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:47.173702    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:47.174703    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:47.204290    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:47.204290    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:47.290268    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:47.281079   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.282388   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.283451   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.284976   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.285968   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:47.290268    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:47.290268    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:47.332308    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:47.332308    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
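
Each probe cycle above runs the same seven checks: minikube shells into the node and lists containers whose names match the k8s_<component> pattern, and the start is considered stuck while every filter returns an empty list. A minimal sketch of that probe loop, assuming only that the docker CLI is reachable on the node (the helper names here are illustrative, not minikube's own):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// controlPlaneComponents mirrors the names the log above greps for.
var controlPlaneComponents = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet",
}

// listContainers returns IDs of containers whose name matches
// k8s_<component>, the same filter used by the probes above.
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range controlPlaneComponents {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Printf("probe failed for %q: %v\n", c, err)
			continue
		}
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		} else {
			fmt.Printf("%q: %d container(s)\n", c, len(ids))
		}
	}
}
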
	I1217 00:45:49.890367    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:49.913613    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:49.943685    7944 logs.go:282] 0 containers: []
	W1217 00:45:49.943685    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:49.947685    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:49.975458    7944 logs.go:282] 0 containers: []
	W1217 00:45:49.975458    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:49.979401    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:50.010709    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.010709    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:50.014179    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:50.046146    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.046146    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:50.050033    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:50.082525    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.082525    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:50.085833    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:50.113901    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.113943    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:50.117783    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:50.148202    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.148290    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:50.148290    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:50.148290    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:50.208056    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:50.208056    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:50.239113    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:50.239113    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:50.326281    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:50.316567   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.317935   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.319862   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.321021   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.322100   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:50.316567   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.317935   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.319862   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.321021   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.322100   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:50.326281    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:50.326281    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:50.369080    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:50.369080    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:52.932111    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:52.956351    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:52.989854    7944 logs.go:282] 0 containers: []
	W1217 00:45:52.989854    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:52.995118    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:53.022557    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.022557    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:53.027906    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:53.062035    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.062035    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:53.065640    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:53.096245    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.096245    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:53.100861    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:53.131945    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.131945    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:53.135650    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:53.164825    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.164825    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:53.168602    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:53.198961    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.198961    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:53.198961    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:53.198961    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:53.260266    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:53.260266    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:53.290682    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:53.290682    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:53.375669    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:53.366817   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.367661   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.370028   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.371310   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.372461   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:53.366817   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.367661   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.370028   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.371310   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.372461   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:53.375669    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:53.375669    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:53.416110    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:53.416110    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
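
Between probe cycles, the same five diagnostic bundles are collected each time: the kubelet journal, a filtered dmesg, kubectl describe nodes (which fails here because nothing is listening on :8441), the Docker/cri-docker journal, and container status. A compact sketch of that collection pass, assuming passwordless sudo on the node and treating the command list as fixed:

package main

import (
	"fmt"
	"os/exec"
)

// logBundles mirrors the "Gathering logs for ..." commands in this log.
var logBundles = []struct{ name, cmd string }{
	{"kubelet", "sudo journalctl -u kubelet -n 400"},
	{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
	{"describe nodes", "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"},
	{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
	{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
}

func main() {
	for _, b := range logBundles {
		fmt.Printf("==> Gathering logs for %s ...\n", b.name)
		// CombinedOutput keeps stderr, which is where kubectl's
		// "connection refused" errors show up in the log above.
		out, err := exec.Command("/bin/bash", "-c", b.cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("failed gathering %s: %v\n", b.name, err)
		}
		fmt.Print(string(out))
	}
}
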
	I1217 00:45:55.971979    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:55.991052    7944 kubeadm.go:602] duration metric: took 4m3.9896216s to restartPrimaryControlPlane
	W1217 00:45:55.991052    7944 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 00:45:55.996485    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 00:45:56.479923    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:45:56.502762    7944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:45:56.518662    7944 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:45:56.523597    7944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:45:56.536371    7944 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:45:56.536371    7944 kubeadm.go:158] found existing configuration files:
	
	I1217 00:45:56.541198    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:45:56.554668    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:45:56.559154    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:45:56.576197    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:45:56.590283    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:45:56.594634    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:45:56.612520    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:45:56.626118    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:45:56.631259    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:45:56.648494    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:45:56.661811    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:45:56.665826    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
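
The grep/rm pairs above are a stale-kubeconfig sweep: each file under /etc/kubernetes is kept only if it already references https://control-plane.minikube.internal:8441 and is removed otherwise. In this run every grep exits with status 2 because kubeadm reset had already deleted the files, so the rm calls are no-ops. A rough standalone equivalent, assuming the endpoint and paths shown in the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

const endpoint = "https://control-plane.minikube.internal:8441"

var confs = []string{
	"/etc/kubernetes/admin.conf",
	"/etc/kubernetes/kubelet.conf",
	"/etc/kubernetes/controller-manager.conf",
	"/etc/kubernetes/scheduler.conf",
}

func main() {
	for _, path := range confs {
		data, err := os.ReadFile(path)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or wrong endpoint: treat as stale and remove,
			// matching the grep-then-rm behaviour above (needs root).
			fmt.Printf("%q may not be in %s - will remove\n", endpoint, path)
			os.Remove(path)
			continue
		}
		fmt.Printf("%s already points at %s\n", path, endpoint)
	}
}
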
	I1217 00:45:56.684539    7944 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:45:56.809159    7944 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 00:45:56.895277    7944 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:45:56.990840    7944 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:49:57.581295    7944 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 00:49:57.581442    7944 kubeadm.go:319] 
	I1217 00:49:57.581498    7944 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 00:49:57.586513    7944 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:49:57.586513    7944 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:49:57.587141    7944 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:49:57.587141    7944 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 00:49:57.587141    7944 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 00:49:57.587141    7944 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 00:49:57.587666    7944 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_INET: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 00:49:57.588407    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 00:49:57.589479    7944 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 00:49:57.589618    7944 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 00:49:57.589771    7944 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 00:49:57.589895    7944 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 00:49:57.589957    7944 kubeadm.go:319] OS: Linux
	I1217 00:49:57.590117    7944 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:49:57.590205    7944 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:49:57.590849    7944 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:49:57.591066    7944 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:49:57.591250    7944 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:49:57.591469    7944 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:49:57.591654    7944 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:49:57.594374    7944 out.go:252]   - Generating certificates and keys ...
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:49:57.595930    7944 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:49:57.595930    7944 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:49:57.595930    7944 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:49:57.598936    7944 out.go:252]   - Booting up control plane ...
	I1217 00:49:57.598936    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:49:57.599930    7944 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001130665s
	I1217 00:49:57.599930    7944 kubeadm.go:319] 
	I1217 00:49:57.599930    7944 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 00:49:57.599930    7944 kubeadm.go:319] 	- The kubelet is not running
	I1217 00:49:57.600944    7944 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 00:49:57.600944    7944 kubeadm.go:319] 
	I1217 00:49:57.601093    7944 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 00:49:57.601093    7944 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 00:49:57.601093    7944 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 00:49:57.601093    7944 kubeadm.go:319] 
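
The kubelet-check phase that times out here is a plain HTTP poll of the kubelet's local health endpoint, repeated until it answers 200 or the 4m0s budget is exhausted. A cut-down version of that probe, assuming it runs on the node itself (a sketch of the check, not kubeadm's actual implementation):

package main

import (
	"fmt"
	"net/http"
	"time"
)

// waitKubeletHealthy polls http://127.0.0.1:10248/healthz the way the
// kubelet-check phase does, giving up once the deadline has passed.
func waitKubeletHealthy(deadline time.Duration) error {
	const healthz = "http://127.0.0.1:10248/healthz"
	client := &http.Client{Timeout: 2 * time.Second}
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		resp, err := client.Get(healthz)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // kubelet is healthy
			}
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("the kubelet is not healthy after %s", deadline)
}

func main() {
	if err := waitKubeletHealthy(4 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
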
	W1217 00:49:57.601093    7944 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001130665s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
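
From here the run follows minikube's one-retry pattern: kubeadm reset --force, another kubeconfig sweep, then the identical kubeadm init command, whose second failure is what finally surfaces as "Error starting cluster". The control flow amounts to the following sketch, where runKubeadmInit and resetNode are hypothetical stand-ins for the ssh_runner calls in this log:

package main

import (
	"errors"
	"fmt"
)

// Hypothetical stand-ins for the ssh_runner commands in this log.
func runKubeadmInit() error { return errors.New("wait-control-plane: kubelet never became healthy") }
func resetNode() error      { return nil } // kubeadm reset --force + kubeconfig sweep

// initWithRetry mirrors the try / reset / try-again pattern seen here:
// exactly one retry, after which the failure is reported to the user.
func initWithRetry() error {
	err := runKubeadmInit()
	if err == nil {
		return nil
	}
	fmt.Printf("! initialization failed, will try again: %v\n", err)
	if rerr := resetNode(); rerr != nil {
		return rerr
	}
	return runKubeadmInit() // second failure becomes "X Error starting cluster"
}

func main() {
	if err := initWithRetry(); err != nil {
		fmt.Printf("X Error starting cluster: %v\n", err)
	}
}
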
	
	I1217 00:49:57.606482    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 00:49:58.061133    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:49:58.080059    7944 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:49:58.085171    7944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:49:58.098234    7944 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:49:58.098234    7944 kubeadm.go:158] found existing configuration files:
	
	I1217 00:49:58.102655    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:49:58.116544    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:49:58.121754    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:49:58.141782    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:49:58.155836    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:49:58.159790    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:49:58.177864    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:49:58.192169    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:49:58.196436    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:49:58.213653    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:49:58.227417    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:49:58.231893    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:49:58.251588    7944 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:49:58.366677    7944 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 00:49:58.451159    7944 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:49:58.548545    7944 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:53:59.244804    7944 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 00:53:59.244874    7944 kubeadm.go:319] 
	I1217 00:53:59.245013    7944 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 00:53:59.252131    7944 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:53:59.252131    7944 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:53:59.252131    7944 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:53:59.252131    7944 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 00:53:59.253316    7944 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 00:53:59.253422    7944 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_INET: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 00:53:59.255258    7944 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 00:53:59.255381    7944 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 00:53:59.255513    7944 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 00:53:59.255633    7944 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 00:53:59.255694    7944 kubeadm.go:319] OS: Linux
	I1217 00:53:59.255790    7944 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:53:59.255877    7944 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:53:59.255998    7944 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:53:59.256094    7944 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:53:59.256215    7944 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:53:59.256364    7944 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:53:59.256426    7944 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:53:59.256548    7944 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:53:59.256670    7944 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:53:59.256888    7944 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:53:59.257050    7944 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:53:59.257070    7944 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:53:59.257070    7944 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:53:59.272325    7944 out.go:252]   - Generating certificates and keys ...
	I1217 00:53:59.272325    7944 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:53:59.273020    7944 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:53:59.273020    7944 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:53:59.273020    7944 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:53:59.273353    7944 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:53:59.273480    7944 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:53:59.273606    7944 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:53:59.273733    7944 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:53:59.273865    7944 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:53:59.274056    7944 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:53:59.274056    7944 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:53:59.274182    7944 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:53:59.274309    7944 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:53:59.274434    7944 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:53:59.274560    7944 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:53:59.274685    7944 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:53:59.274812    7944 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:53:59.274938    7944 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:53:59.275063    7944 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:53:59.277866    7944 out.go:252]   - Booting up control plane ...
	I1217 00:53:59.277866    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:53:59.278506    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:53:59.278506    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:53:59.278506    7944 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:53:59.279865    7944 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:53:59.280054    7944 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:53:59.280189    7944 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000873338s
	I1217 00:53:59.280189    7944 kubeadm.go:319] 
	I1217 00:53:59.280189    7944 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 00:53:59.280189    7944 kubeadm.go:319] 	- The kubelet is not running
	I1217 00:53:59.280189    7944 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 00:53:59.280189    7944 kubeadm.go:319] 
	I1217 00:53:59.280189    7944 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 00:53:59.280712    7944 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 00:53:59.280785    7944 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 00:53:59.280785    7944 kubeadm.go:319] 
	I1217 00:53:59.280785    7944 kubeadm.go:403] duration metric: took 12m7.3287248s to StartCluster
	I1217 00:53:59.280785    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:59.285017    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:59.529112    7944 cri.go:89] found id: ""
	I1217 00:53:59.529112    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.529112    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:59.529112    7944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:53:59.533754    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:59.574863    7944 cri.go:89] found id: ""
	I1217 00:53:59.574863    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.574863    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:53:59.574863    7944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:53:59.579181    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:59.620688    7944 cri.go:89] found id: ""
	I1217 00:53:59.620688    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.620688    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:53:59.620688    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:59.627987    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:59.676059    7944 cri.go:89] found id: ""
	I1217 00:53:59.676114    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.676114    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:59.676114    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:59.680719    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:59.723707    7944 cri.go:89] found id: ""
	I1217 00:53:59.723707    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.723707    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:59.723707    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:59.729555    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:59.774476    7944 cri.go:89] found id: ""
	I1217 00:53:59.774476    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.774560    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:59.774560    7944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:59.780477    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:59.820909    7944 cri.go:89] found id: ""
	I1217 00:53:59.820909    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.820909    7944 logs.go:284] No container was found matching "kindnet"
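
Note the change of probe mechanism for this final sweep: the earlier loops filtered docker ps by container name, while these checks query the CRI directly with crictl ps --name. A comparable wrapper, assuming crictl is installed and configured for the node's cri-dockerd socket:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// criContainerIDs lists container IDs for one component through the CRI,
// matching the `sudo crictl ps -a --quiet --name=<component>` calls above.
func criContainerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a",
		"--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := criContainerIDs(c)
		if err != nil {
			fmt.Printf("listing %s failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%s: %d container(s)\n", c, len(ids))
	}
}
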
	I1217 00:53:59.820909    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:59.820909    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:59.893583    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:59.893583    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:59.926154    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:59.926154    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:00.179462    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:00.169127   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.170223   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.171927   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.173016   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.174482   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:00.169127   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.170223   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.171927   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.173016   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.174482   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:00.179462    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:54:00.179462    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:54:00.221875    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:54:00.221875    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 00:54:00.281055    7944 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000873338s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 00:54:00.281122    7944 out.go:285] * 
	W1217 00:54:00.281210    7944 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output quoted in full above; duplicate block elided]
	
	W1217 00:54:00.281448    7944 out.go:285] * 
	W1217 00:54:00.283315    7944 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:54:00.296133    7944 out.go:203] 
	W1217 00:54:00.298699    7944 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout/stderr: [identical to the kubeadm init output quoted in full above; duplicate block elided]
	W1217 00:54:00.299289    7944 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 00:54:00.299350    7944 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 00:54:00.301526    7944 out.go:203] 
	
	
	==> Docker <==
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799347277Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799352978Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799377780Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799412283Z" level=info msg="Initializing buildkit"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.911073637Z" level=info msg="Completed buildkit initialization"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918044834Z" level=info msg="Daemon has completed initialization"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918252552Z" level=info msg="API listen on [::]:2376"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918284354Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 00:41:48 functional-409700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918293455Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 00:41:48 functional-409700 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:41:48 functional-409700 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 17 00:41:48 functional-409700 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 17 00:41:49 functional-409700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Loaded network plugin cni"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 00:41:49 functional-409700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:56.625626   41960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.626623   41960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.627437   41960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.630495   41960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:56.631827   41960 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001333] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001212] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001083] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000810] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000879] FS:  0000000000000000 GS:  0000000000000000
	[Dec17 00:41] CPU: 8 PID: 65919 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000795] RIP: 0033:0x7fc513f26b20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7fc513f26af6.
	[  +0.000661] RSP: 002b:00007ffce9a430e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000957] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000792] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000787] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001172] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001280] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001257] FS:  0000000000000000 GS:  0000000000000000
	[  +0.952455] CPU: 6 PID: 66046 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000828] RIP: 0033:0x7f7de767eb20
	[  +0.000402] Code: Unable to access opcode bytes at RIP 0x7f7de767eaf6.
	[  +0.000691] RSP: 002b:00007ffdccfc39b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000866] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000810] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001071] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001218] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001105] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001100] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 00:54:56 up  1:14,  0 user,  load average: 0.32, 0.36, 0.44
	Linux functional-409700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:54:53 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:54:54 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 393.
	Dec 17 00:54:54 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:54:54 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:54:54 functional-409700 kubelet[41816]: E1217 00:54:54.473532   41816 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:54:54 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:54:54 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:54:55 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 394.
	Dec 17 00:54:55 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:54:55 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:54:55 functional-409700 kubelet[41843]: E1217 00:54:55.187347   41843 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:54:55 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:54:55 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:54:55 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 395.
	Dec 17 00:54:55 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:54:55 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:54:55 functional-409700 kubelet[41869]: E1217 00:54:55.944474   41869 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:54:55 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:54:55 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:54:56 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 396.
	Dec 17 00:54:56 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:54:56 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:54:56 functional-409700 kubelet[41969]: E1217 00:54:56.693794   41969 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:54:56 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:54:56 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700: exit status 2 (585.5558ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-409700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/ComponentHealth (54.41s)
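The error dump above (emitted three times by minikube) reduces to one root cause: the kubelet is crash-looping because it refuses to run on a cgroup v1 host (see the kubelet excerpt, restart counter at 393-396). The embedded logs themselves suggest two remediations; a minimal sketch of each, using the profile name from this report and taking the flag and config-field names from the log text rather than verifying them independently:

	# Per minikube's own suggestion: restart the profile with the systemd cgroup driver
	out/minikube-windows-amd64.exe start -p functional-409700 --extra-config=kubelet.cgroup-driver=systemd

	# Per the SystemVerification warning: explicitly opt back into cgroup v1 via the kubelet
	# configuration file (lowerCamelCase spelling of the 'FailCgroupV1' option is assumed here)
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	failCgroupV1: false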

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (20.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-409700 apply -f testdata\invalidsvc.yaml
functional_test.go:2326: (dbg) Non-zero exit: kubectl --context functional-409700 apply -f testdata\invalidsvc.yaml: exit status 1 (20.1946221s)

                                                
                                                
** stderr ** 
	error: error validating "testdata\\invalidsvc.yaml": error validating data: failed to download openapi: Get "https://127.0.0.1:56622/openapi/v2?timeout=32s": EOF; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test.go:2328: kubectl --context functional-409700 apply -f testdata\invalidsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/InvalidService (20.20s)
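The error text's own escape hatch would be, roughly:

	kubectl --context functional-409700 apply -f testdata\invalidsvc.yaml --validate=false

but it would not rescue this run: validation only failed because the openapi download hit the dead apiserver behind 127.0.0.1:56622, and the apply itself needs that same endpoint.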

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (5.29s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 status
functional_test.go:869: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-409700 status: exit status 2 (587.7388ms)

                                                
                                                
-- stdout --
	functional-409700
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
functional_test.go:871: failed to run minikube status. args "out/minikube-windows-amd64.exe -p functional-409700 status" : exit status 2
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:875: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-409700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: exit status 2 (582.9328ms)

                                                
                                                
-- stdout --
	host:Running,kublet:Stopped,apiserver:Stopped,kubeconfig:Configured

                                                
                                                
-- /stdout --
functional_test.go:877: failed to run minikube status with custom format: args "out/minikube-windows-amd64.exe -p functional-409700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}": exit status 2
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 status -o json
functional_test.go:887: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-409700 status -o json: exit status 2 (599.9386ms)

                                                
                                                
-- stdout --
	{"Name":"functional-409700","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
functional_test.go:889: failed to run minikube status with json output. args "out/minikube-windows-amd64.exe -p functional-409700 status -o json" : exit status 2
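Of the three status invocations, the JSON form is the one meant for programmatic consumption; a hypothetical PowerShell one-liner over the same output (standard cmdlets; field names taken from the JSON printed above):

	out/minikube-windows-amd64.exe -p functional-409700 status -o json | ConvertFrom-Json | Select-Object Host, Kubelet, APIServer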
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-409700
helpers_test.go:244: (dbg) docker inspect functional-409700:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de",
	        "Created": "2025-12-17T00:24:05.223199249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:24:05.522288836Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hosts",
	        "LogPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de-json.log",
	        "Name": "/functional-409700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-409700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-409700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-409700",
	                "Source": "/var/lib/docker/volumes/functional-409700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-409700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-409700",
	                "name.minikube.sigs.k8s.io": "functional-409700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e875b43ca920e8e90c82b8f1c4d2b0999a57d980ebe17c6406f45a4ccb58168",
	            "SandboxKey": "/var/run/docker/netns/6e875b43ca92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56623"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56619"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56620"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56621"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56622"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-409700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ee1b2722ed4e503e063723d4c0c00abc99d4e57387b6e181156511528a5a0896",
	                    "EndpointID": "42fbe7a4b084643a92cc2b6c93734665bcde06afb5eef9fe47b1c8f2757b2d71",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-409700",
	                        "ee5097ea8c4b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
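The inspect output ties the earlier refusals together: container port 8441/tcp (this profile's apiserver port) is published on 127.0.0.1:56622, exactly the endpoint kubectl could not reach. The same single mapping can be read off directly with the standard docker CLI:

	docker port functional-409700 8441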
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700: exit status 2 (589.6322ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 logs -n 25: (1.253295s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                                                                 ARGS                                                                                                  │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ config     │ functional-409700 config get cpus                                                                                                                                                                     │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh cat /etc/hostname                                                                                                                                                               │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ service    │ functional-409700 service list -o json                                                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ config     │ functional-409700 config unset cpus                                                                                                                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ config     │ functional-409700 config get cpus                                                                                                                                                                     │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ cp         │ functional-409700 cp functional-409700:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2573441544\001\cp-test.txt │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ tunnel     │ functional-409700 tunnel --alsologtostderr                                                                                                                                                            │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ tunnel     │ functional-409700 tunnel --alsologtostderr                                                                                                                                                            │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ service    │ functional-409700 service --namespace=default --https --url hello-node                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ ssh        │ functional-409700 ssh -n functional-409700 sudo cat /home/docker/cp-test.txt                                                                                                                          │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ tunnel     │ functional-409700 tunnel --alsologtostderr                                                                                                                                                            │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ service    │ functional-409700 service hello-node --url --format={{.IP}}                                                                                                                                           │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ cp         │ functional-409700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ service    │ functional-409700 service hello-node --url                                                                                                                                                            │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ ssh        │ functional-409700 ssh -n functional-409700 sudo cat /tmp/does/not/exist/cp-test.txt                                                                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ addons     │ functional-409700 addons list                                                                                                                                                                         │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ addons     │ functional-409700 addons list -o json                                                                                                                                                                 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /etc/ssl/certs/4168.pem                                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /usr/share/ca-certificates/4168.pem                                                                                                                                    │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                                                              │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /etc/ssl/certs/41682.pem                                                                                                                                               │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /usr/share/ca-certificates/41682.pem                                                                                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                                                              │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ docker-env │ functional-409700 docker-env                                                                                                                                                                          │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /etc/test/nested/copy/4168/hosts                                                                                                                                       │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	└────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:41:42
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:41:42.742737    7944 out.go:360] Setting OutFile to fd 1692 ...
	I1217 00:41:42.785452    7944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:41:42.785452    7944 out.go:374] Setting ErrFile to fd 2032...
	I1217 00:41:42.785452    7944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:41:42.823093    7944 out.go:368] Setting JSON to false
	I1217 00:41:42.826928    7944 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3691,"bootTime":1765928411,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:41:42.827062    7944 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:41:42.832423    7944 out.go:179] * [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:41:42.834008    7944 notify.go:221] Checking for updates...
	I1217 00:41:42.836028    7944 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:41:42.837747    7944 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:41:42.839400    7944 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:41:42.841743    7944 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:41:42.843853    7944 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:41:42.846824    7944 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:41:42.847138    7944 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:41:43.032802    7944 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:41:43.036200    7944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:41:43.287623    7944 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-17 00:41:43.26443223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:41:43.290624    7944 out.go:179] * Using the docker driver based on existing profile
	I1217 00:41:43.295624    7944 start.go:309] selected driver: docker
	I1217 00:41:43.295624    7944 start.go:927] validating driver "docker" against &{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:41:43.295624    7944 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:41:43.302622    7944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:41:43.528811    7944 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-17 00:41:43.511883839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:41:43.567003    7944 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:41:43.567003    7944 cni.go:84] Creating CNI manager for ""
	I1217 00:41:43.567003    7944 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:41:43.567003    7944 start.go:353] cluster config:
	{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:41:43.571110    7944 out.go:179] * Starting "functional-409700" primary control-plane node in "functional-409700" cluster
	I1217 00:41:43.575004    7944 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 00:41:43.577924    7944 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:41:43.581930    7944 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:41:43.581930    7944 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:41:43.581930    7944 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 00:41:43.581930    7944 cache.go:65] Caching tarball of preloaded images
	I1217 00:41:43.582517    7944 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 00:41:43.582517    7944 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 00:41:43.582517    7944 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\config.json ...
	I1217 00:41:43.660928    7944 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:41:43.660928    7944 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:41:43.660928    7944 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:41:43.660928    7944 start.go:360] acquireMachinesLock for functional-409700: {Name:mk3729943c20c012b6c7db136193ce43a4a81cc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:41:43.660928    7944 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-409700"
	I1217 00:41:43.660928    7944 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:41:43.660928    7944 fix.go:54] fixHost starting: 
	I1217 00:41:43.667914    7944 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:41:43.723914    7944 fix.go:112] recreateIfNeeded on functional-409700: state=Running err=<nil>
	W1217 00:41:43.723914    7944 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:41:43.726919    7944 out.go:252] * Updating the running docker "functional-409700" container ...
	I1217 00:41:43.726919    7944 machine.go:94] provisionDockerMachine start ...
	I1217 00:41:43.731914    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
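The repeated "docker container inspect -f ..." calls here resolve which host port Docker Desktop mapped to the node container's SSH port (22/tcp); in this run the answer is 56623. A minimal sketch of the same lookup in Go, shelling out to the docker CLI with the identical template (sshHostPort is a hypothetical helper name; minikube issues this through its cli_runner package):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort mirrors the inspect template from the log: it returns the
// host port Docker mapped to the named container's 22/tcp.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", fmt.Errorf("inspect %s: %w", container, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("functional-409700")
	if err != nil {
		panic(err)
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + port) // 56623 in this run
}

Templating the inspect output this way avoids parsing the full container JSON just to recover one port binding.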
	I1217 00:41:43.796916    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:43.796916    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:43.796916    7944 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:41:43.969131    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:41:43.969131    7944 ubuntu.go:182] provisioning hostname "functional-409700"
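The "native" SSH client logged above is an in-process Go SSH client rather than a shelled-out ssh.exe: it dials the forwarded port on 127.0.0.1 with the machine's id_rsa key and runs one command per session. A sketch with golang.org/x/crypto/ssh, reusing the port and key path this run logged (host-key verification is skipped only because the target is a throwaway test VM):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path and forwarded port are the ones this run logged.
	key, err := os.ReadFile(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User: "docker",
		Auth: []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Throwaway test VM: skip host-key verification.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:56623", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput("hostname")
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s", out) // functional-409700
}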
	I1217 00:41:43.975058    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.033428    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:44.033980    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:44.033980    7944 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-409700 && echo "functional-409700" | sudo tee /etc/hostname
	I1217 00:41:44.218389    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:41:44.221624    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.281826    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:44.282333    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:44.282333    7944 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-409700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-409700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-409700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:41:44.449024    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
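The shell fragment that just ran is generated per machine name: it leaves /etc/hosts alone if the hostname is already mapped, rewrites an existing 127.0.1.1 entry if there is one, and otherwise appends a new one, so re-provisioning stays idempotent. A sketch of rendering that fragment from the name (hostsFixupScript is a hypothetical helper; the real rendering lives in minikube's ubuntu provisioner):

package main

import "fmt"

// hostsFixupScript renders the /etc/hosts fix-up shown in the log above
// for an arbitrary machine name.
func hostsFixupScript(name string) string {
	return fmt.Sprintf(`
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
	if grep -xq '127.0.1.1\s.*' /etc/hosts; then
		sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
	else
		echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
	fi
fi`, name)
}

func main() { fmt.Println(hostsFixupScript("functional-409700")) }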
	I1217 00:41:44.449024    7944 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 00:41:44.449024    7944 ubuntu.go:190] setting up certificates
	I1217 00:41:44.449024    7944 provision.go:84] configureAuth start
	I1217 00:41:44.452071    7944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:41:44.516121    7944 provision.go:143] copyHostCerts
	I1217 00:41:44.516430    7944 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 00:41:44.516430    7944 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 00:41:44.516430    7944 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 00:41:44.517399    7944 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 00:41:44.517399    7944 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 00:41:44.517399    7944 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 00:41:44.518364    7944 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 00:41:44.518364    7944 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 00:41:44.518364    7944 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 00:41:44.519103    7944 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-409700 san=[127.0.0.1 192.168.49.2 functional-409700 localhost minikube]
	I1217 00:41:44.613354    7944 provision.go:177] copyRemoteCerts
	I1217 00:41:44.617354    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:41:44.620354    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.676405    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:44.805633    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:41:44.840310    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:41:44.872497    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:41:44.899304    7944 provision.go:87] duration metric: took 450.2424ms to configureAuth
	I1217 00:41:44.899304    7944 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:41:44.899304    7944 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:41:44.902693    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.962192    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:44.962661    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:44.962688    7944 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 00:41:45.129265    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 00:41:45.129265    7944 ubuntu.go:71] root file system type: overlay
	I1217 00:41:45.129265    7944 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 00:41:45.133980    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.191141    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:45.191583    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:45.191676    7944 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 00:41:45.381081    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 00:41:45.384910    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.439634    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:45.439634    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:45.439634    7944 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 00:41:45.639837    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
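The diff-then-move one-liner above is an idempotence guard: the freshly rendered unit is written to docker.service.new, and docker is only re-enabled and restarted when that content actually differs from the installed unit, so a no-op start leaves the daemon untouched. The same compare-then-swap logic expressed locally in Go (a sketch; minikube runs the shell form remotely over SSH):

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// installIfChanged only swaps in the new unit and bounces docker when the
// rendered content differs from what is already installed.
func installIfChanged(path string, rendered []byte) error {
	current, err := os.ReadFile(path)
	if err == nil && bytes.Equal(current, rendered) {
		return nil // unchanged: skip daemon-reload and restart entirely
	}
	if err := os.WriteFile(path+".new", rendered, 0o644); err != nil {
		return err
	}
	if err := os.Rename(path+".new", path); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"daemon-reload"},
		{"enable", "docker"},
		{"restart", "docker"},
	} {
		cmd := exec.Command("systemctl", append([]string{"-f"}, args...)...)
		if out, err := cmd.CombinedOutput(); err != nil {
			return fmt.Errorf("systemctl %v: %s: %w", args, out, err)
		}
	}
	return nil
}

func main() {
	unit := []byte("[Unit]\nDescription=Docker Application Container Engine\n") // rendered elsewhere
	if err := installIfChanged("/lib/systemd/system/docker.service", unit); err != nil {
		panic(err)
	}
}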
	I1217 00:41:45.639837    7944 machine.go:97] duration metric: took 1.9128981s to provisionDockerMachine
	I1217 00:41:45.639837    7944 start.go:293] postStartSetup for "functional-409700" (driver="docker")
	I1217 00:41:45.639837    7944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:41:45.643968    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:41:45.647579    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.702256    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:45.830302    7944 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:41:45.840912    7944 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:41:45.840912    7944 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:41:45.840912    7944 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 00:41:45.840912    7944 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 00:41:45.841469    7944 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 00:41:45.842433    7944 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts -> hosts in /etc/test/nested/copy/4168
	I1217 00:41:45.846605    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4168
	I1217 00:41:45.861850    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 00:41:45.894051    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts --> /etc/test/nested/copy/4168/hosts (40 bytes)
	I1217 00:41:45.924540    7944 start.go:296] duration metric: took 284.7004ms for postStartSetup
	I1217 00:41:45.929030    7944 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:41:45.931390    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.988238    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:46.118181    7944 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:41:46.128256    7944 fix.go:56] duration metric: took 2.4673029s for fixHost
	I1217 00:41:46.128336    7944 start.go:83] releasing machines lock for "functional-409700", held for 2.4673029s
	I1217 00:41:46.132380    7944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:41:46.192243    7944 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 00:41:46.196238    7944 ssh_runner.go:195] Run: cat /version.json
	I1217 00:41:46.196238    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:46.199443    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:46.250894    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:46.252723    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:46.374927    7944 ssh_runner.go:195] Run: systemctl --version
	W1217 00:41:46.375040    7944 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 00:41:46.393243    7944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:41:46.405015    7944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:41:46.411122    7944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:41:46.427748    7944 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:41:46.427748    7944 start.go:496] detecting cgroup driver to use...
	I1217 00:41:46.427748    7944 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:41:46.428359    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:41:46.459279    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:41:46.481169    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:41:46.495981    7944 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:41:46.501301    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 00:41:46.522269    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:41:46.543007    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:41:46.564748    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W1217 00:41:46.571173    7944 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 00:41:46.571173    7944 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 00:41:46.587140    7944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:41:46.608125    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:41:46.628561    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:41:46.651071    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 00:41:46.670567    7944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:41:46.691876    7944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:41:46.708884    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:46.907593    7944 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 00:41:47.157536    7944 start.go:496] detecting cgroup driver to use...
	I1217 00:41:47.157588    7944 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:41:47.161701    7944 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 00:41:47.187508    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:41:47.211591    7944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:41:47.291331    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:41:47.315837    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:41:47.336371    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:41:47.365154    7944 ssh_runner.go:195] Run: which cri-dockerd
	I1217 00:41:47.376814    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 00:41:47.391947    7944 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 00:41:47.416863    7944 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 00:41:47.573803    7944 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 00:41:47.742508    7944 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 00:41:47.742508    7944 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 00:41:47.769569    7944 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 00:41:47.792419    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:47.926195    7944 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 00:41:48.924753    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:41:48.948387    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 00:41:48.972423    7944 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 00:41:49.001034    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:41:49.024808    7944 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 00:41:49.170637    7944 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 00:41:49.341524    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:49.489502    7944 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 00:41:49.515161    7944 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 00:41:49.538565    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:49.678445    7944 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 00:41:49.792662    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:41:49.810919    7944 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 00:41:49.817201    7944 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 00:41:49.824745    7944 start.go:564] Will wait 60s for crictl version
	I1217 00:41:49.829680    7944 ssh_runner.go:195] Run: which crictl
	I1217 00:41:49.841215    7944 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:41:49.886490    7944 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 00:41:49.890545    7944 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:41:49.932656    7944 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:41:49.973421    7944 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 00:41:49.976704    7944 cli_runner.go:164] Run: docker exec -t functional-409700 dig +short host.docker.internal
	I1217 00:41:50.163467    7944 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 00:41:50.168979    7944 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 00:41:50.182632    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:50.243980    7944 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 00:41:50.246233    7944 kubeadm.go:884] updating cluster {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:41:50.246321    7944 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:41:50.249328    7944 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:41:50.284688    7944 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-409700
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1217 00:41:50.284688    7944 docker.go:621] Images already preloaded, skipping extraction
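Both image checks above are a plain "docker images" run inside the node with a Go template, compared against the images expected for this Kubernetes version. The equivalent one-off query from Go (sketch; minikube issues it over SSH via ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same format template the log shows.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // the -- stdout -- block above is this listing
}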
	I1217 00:41:50.288341    7944 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:41:50.318208    7944 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-409700
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1217 00:41:50.318208    7944 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:41:50.318208    7944 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1217 00:41:50.318208    7944 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-409700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:41:50.322786    7944 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 00:41:50.580992    7944 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 00:41:50.580992    7944 cni.go:84] Creating CNI manager for ""
	I1217 00:41:50.580992    7944 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:41:50.580992    7944 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:41:50.580992    7944 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-409700 NodeName:functional-409700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:41:50.581552    7944 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-409700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 00:41:50.586113    7944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:41:50.602747    7944 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:41:50.606600    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:41:50.618442    7944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 00:41:50.639202    7944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:41:50.660303    7944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
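The kubeadm YAML printed above is rendered from a Go template filled in from the cluster config and shipped to the node as kubeadm.yaml.new, as this scp line shows. A reduced sketch of that render pattern with text/template (the map keys here are illustrative; the real templates live in minikube's bootstrapper package):

package main

import (
	"os"
	"text/template"
)

// A fragment of an InitConfiguration template; the full document printed
// above is produced the same way.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("init").Parse(initCfg))
	err := t.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.49.2",
		"APIServerPort":    8441,
		"CRISocket":        "unix:///var/run/cri-dockerd.sock",
		"NodeName":         "functional-409700",
	})
	if err != nil {
		panic(err)
	}
}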
	I1217 00:41:50.686181    7944 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:41:50.699393    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:50.841016    7944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:41:50.909095    7944 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700 for IP: 192.168.49.2
	I1217 00:41:50.909095    7944 certs.go:195] generating shared ca certs ...
	I1217 00:41:50.909181    7944 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:41:50.909751    7944 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 00:41:50.909751    7944 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 00:41:50.909751    7944 certs.go:257] generating profile certs ...
	I1217 00:41:50.911054    7944 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\client.key
	I1217 00:41:50.911486    7944 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key.dc66fb1b
	I1217 00:41:50.911858    7944 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key
	I1217 00:41:50.913273    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 00:41:50.913634    7944 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 00:41:50.913687    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 00:41:50.913976    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 00:41:50.914271    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 00:41:50.914593    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 00:41:50.915068    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 00:41:50.916395    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:41:50.945779    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:41:50.974173    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:41:51.006494    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:41:51.039634    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:41:51.069500    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:41:51.095965    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:41:51.124108    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:41:51.153111    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 00:41:51.181612    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:41:51.209244    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 00:41:51.236994    7944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:41:51.261730    7944 ssh_runner.go:195] Run: openssl version
	I1217 00:41:51.280852    7944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.301978    7944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 00:41:51.322912    7944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.331873    7944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.336845    7944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.388885    7944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:41:51.407531    7944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.426119    7944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:41:51.446689    7944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.455113    7944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.459541    7944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.507465    7944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:41:51.525452    7944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.543170    7944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 00:41:51.560439    7944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.566853    7944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.571342    7944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.621647    7944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
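
The test/hash/symlink sequence above is the standard OpenSSL trust-store pattern: each PEM under /usr/share/ca-certificates is linked into /etc/ssl/certs under its subject hash (51391683.0, b5213941.0, etc. above) so libssl's lookup can find it. A minimal standalone sketch of that pattern; the PEM path here is illustrative, not taken from this run:

    #!/bin/bash
    # Install a CA cert the way the logged commands do: check it is
    # non-empty, then create the hash-named symlink OpenSSL expects.
    set -euo pipefail
    pem=/usr/share/ca-certificates/example.pem   # illustrative path

    sudo test -s "$pem"                          # non-empty, as `test -s` above
    hash=$(openssl x509 -hash -noout -in "$pem") # subject hash, e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/$hash.0"  # hash-named symlink
    sudo test -L "/etc/ssl/certs/$hash.0"        # confirm, as `test -L` above
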
	I1217 00:41:51.639899    7944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:41:51.651440    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:41:51.702199    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:41:51.752106    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:41:51.800819    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:41:51.851441    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:41:51.900439    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
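
Each `-checkend 86400` probe above asks OpenSSL whether the certificate expires within the next 86400 seconds (24 hours); a non-zero exit means it does. A hedged sketch of the same check over a list of the cert paths from this run:

    #!/bin/bash
    # Exit non-zero if any control-plane cert expires within 24h,
    # the same probe as the `-checkend 86400` calls above.
    certs=(
      /var/lib/minikube/certs/apiserver-kubelet-client.crt
      /var/lib/minikube/certs/apiserver-etcd-client.crt
      /var/lib/minikube/certs/etcd/server.crt
      /var/lib/minikube/certs/front-proxy-client.crt
    )
    for c in "${certs[@]}"; do
      if ! openssl x509 -noout -in "$c" -checkend 86400; then
        echo "expiring within 24h: $c" >&2
        exit 1
      fi
    done
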
	I1217 00:41:51.944312    7944 kubeadm.go:401] StartCluster: {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:41:51.948688    7944 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 00:41:51.985002    7944 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:41:51.998839    7944 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:41:51.998925    7944 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:41:52.003287    7944 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:41:52.016206    7944 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.019955    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:52.077101    7944 kubeconfig.go:125] found "functional-409700" server: "https://127.0.0.1:56622"
	I1217 00:41:52.084213    7944 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:41:52.100216    7944 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 00:24:17.645837868 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 00:41:50.679316242 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
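
The drift check above is a plain `diff -u` between the kubeadm config last written to the node and the freshly rendered one; any difference (here the enable-admission-plugins value) triggers a cluster reconfigure. A minimal sketch of that decision, assuming the same two file paths:

    #!/bin/bash
    # Reconfigure only when the rendered config differs from the
    # one already on the node, as the `diff -u` above decides.
    old=/var/tmp/minikube/kubeadm.yaml
    new=/var/tmp/minikube/kubeadm.yaml.new
    if sudo diff -u "$old" "$new" >/dev/null; then
      echo "no drift: reusing existing cluster configuration"
    else
      echo "drift detected: reconfiguring from $new"
      sudo cp "$new" "$old"   # the same `sudo cp` appears below in the log
    fi
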
	I1217 00:41:52.100258    7944 kubeadm.go:1161] stopping kube-system containers ...
	I1217 00:41:52.104145    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 00:41:52.137767    7944 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 00:41:52.163943    7944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:41:52.178186    7944 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 17 00:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 17 00:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 17 00:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 17 00:28 /etc/kubernetes/scheduler.conf
	
	I1217 00:41:52.182824    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:41:52.204493    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:41:52.219638    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.223951    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:41:52.243159    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:41:52.260005    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.264353    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:41:52.281662    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:41:52.297828    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.301928    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
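
The grep/rm pairs above implement a simple rule: any kubeconfig under /etc/kubernetes that does not reference the expected control-plane endpoint is deleted so the kubeadm phases that follow regenerate it (admin.conf passed the grep in this run and was kept). A sketch of that loop, with the endpoint taken from the log:

    #!/bin/bash
    # Drop any kubeconfig that no longer points at the expected
    # endpoint so `kubeadm init phase kubeconfig` recreates it.
    endpoint="https://control-plane.minikube.internal:8441"
    for conf in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      if ! sudo grep -q "$endpoint" "/etc/kubernetes/$conf"; then
        sudo rm -f "/etc/kubernetes/$conf"
      fi
    done
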
	I1217 00:41:52.320845    7944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:41:52.344713    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:52.568408    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:53.273580    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:53.519011    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:53.597190    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
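
The restart path re-runs individual `kubeadm init` phases rather than a full init: certs, kubeconfig, kubelet-start, then the control-plane and etcd static pods, with the versioned binaries prepended to PATH exactly as the logged commands show. Sketched in order:

    #!/bin/bash
    # Re-run the individual kubeadm phases from the restart path above.
    bindir=/var/lib/minikube/binaries/v1.35.0-beta.0
    cfg=/var/tmp/minikube/kubeadm.yaml
    for phase in "certs all" "kubeconfig all" "kubelet-start" \
                 "control-plane all" "etcd local"; do
      sudo /bin/bash -c "env PATH=\"$bindir:\$PATH\" kubeadm init phase $phase --config $cfg"
    done
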
	I1217 00:41:53.657031    7944 api_server.go:52] waiting for apiserver process to appear ...
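
The pgrep probes that follow are a fixed-interval wait: roughly every 500ms minikube asks whether a kube-apiserver process matching the profile exists yet. An equivalent loop with an explicit deadline; the 60-second timeout here is illustrative, not taken from the log:

    #!/bin/bash
    # Poll for the apiserver process every 0.5s, as the pgrep
    # calls below do; give up after 60s (illustrative value).
    deadline=$((SECONDS + 60))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      (( SECONDS < deadline )) || { echo "timed out waiting for kube-apiserver" >&2; exit 1; }
      sleep 0.5
    done
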
	I1217 00:41:53.662643    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:54.162433    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:54.661965    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:55.162165    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:55.662293    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:56.162422    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:56.662001    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:57.162515    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:57.662491    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:58.162857    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:58.662457    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:59.161782    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:59.663346    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:00.162336    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:00.662670    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:01.161692    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:01.663703    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:02.163358    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:02.663185    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:03.161803    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:03.663829    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:04.166542    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:04.662220    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:05.162702    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:05.662389    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:06.162800    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:06.662296    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:07.162770    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:07.662185    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:08.163484    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:08.662101    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:09.163166    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:09.661850    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:10.163219    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:10.662450    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:11.163350    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:11.661443    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:12.162140    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:12.662908    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:13.162389    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:13.662815    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:14.162317    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:14.662985    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:15.161953    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:15.662582    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:16.162711    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:16.662384    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:17.163213    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:17.662951    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:18.162863    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:18.663346    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:19.162301    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:19.664439    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:20.162163    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:20.663035    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:21.163263    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:21.663152    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:22.161955    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:22.663328    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:23.162424    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:23.662868    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:24.162408    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:24.663167    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:25.162910    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:25.662394    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:26.162371    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:26.662162    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:27.161992    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:27.662354    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:28.162558    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:28.663353    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:29.162056    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:29.662442    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:30.162717    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:30.662828    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:31.162856    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:31.662970    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:32.162077    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:32.662936    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:33.163640    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:33.662803    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:34.163131    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:34.662216    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:35.162136    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:35.662293    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:36.162086    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:36.663084    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:37.161766    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:37.664543    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:38.162298    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:38.662872    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:39.162985    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:39.663388    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:40.162888    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:40.662630    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:41.163272    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:41.662830    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:42.163249    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:42.662963    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:43.163651    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:43.662883    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:44.163502    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:44.662963    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:45.162911    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:45.663838    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:46.163526    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:46.663376    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:47.163496    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:47.662662    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:48.163562    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:48.663717    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:49.163610    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:49.662532    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:50.163860    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:50.663359    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:51.162827    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:51.663347    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:52.162765    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:52.663289    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:53.163097    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:53.661774    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:42:53.693561    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.693561    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:42:53.697663    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:42:53.729976    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.729976    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:42:53.733954    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:42:53.762808    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.762808    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:42:53.767775    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:42:53.797017    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.797017    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:42:53.800693    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:42:53.829028    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.829028    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:42:53.832681    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:42:53.860730    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.860730    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:42:53.864375    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:42:53.893858    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.893858    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:42:53.893858    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:42:53.893858    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:42:53.958662    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:42:53.958662    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:42:53.990110    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:42:53.990110    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:42:54.075886    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:42:54.062994   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.064181   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.068054   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.070063   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.071483   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:42:54.062994   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.064181   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.068054   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.070063   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.071483   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:42:54.075886    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:42:54.075886    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:42:54.124100    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:42:54.124100    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
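
The container-status command above is a two-level fallback: `which crictl || echo crictl` keeps the command word non-empty whether or not crictl is on PATH, and the outer `||` falls back to the docker CLI when the crictl invocation fails. Broken out for readability:

    #!/bin/bash
    # Same fallback as the logged container-status command:
    # prefer crictl, fall back to `docker ps -a` if it fails.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
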
	I1217 00:42:56.693664    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:56.717550    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:42:56.749444    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.749476    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:42:56.753285    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:42:56.784073    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.784073    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:42:56.788320    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:42:56.817232    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.817232    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:42:56.821873    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:42:56.853120    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.853120    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:42:56.857160    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:42:56.887514    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.887514    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:42:56.891198    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:42:56.922568    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.922636    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:42:56.925831    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:42:56.954531    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.954531    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:42:56.954531    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:42:56.954531    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:42:57.019098    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:42:57.019098    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:42:57.050929    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:42:57.050955    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:42:57.138578    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:42:57.130682   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.131621   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.132913   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.134193   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.135394   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:42:57.130682   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.131621   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.132913   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.134193   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.135394   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:42:57.138578    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:42:57.138578    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:42:57.182851    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:42:57.182851    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:42:59.736560    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:59.756547    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:42:59.785666    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.785666    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:42:59.789191    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:42:59.818090    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.818151    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:42:59.821701    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:42:59.849198    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.849198    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:42:59.852824    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:42:59.880565    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.880565    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:42:59.884161    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:42:59.915009    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.915009    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:42:59.918550    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:42:59.949230    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.949230    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:42:59.953371    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:42:59.979962    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.979962    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:42:59.979962    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:42:59.979962    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:00.044543    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:00.044543    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:00.075045    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:00.075045    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:00.184096    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:00.172623   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.173411   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.176396   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.177559   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.178839   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:00.172623   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.173411   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.176396   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.177559   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.178839   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:00.184096    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:00.184096    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:00.229125    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:00.229125    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:02.788235    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:02.812066    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:02.844035    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.844035    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:02.847391    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:02.879346    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.879346    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:02.883507    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:02.911508    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.911573    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:02.915132    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:02.944186    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.944186    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:02.948177    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:02.977489    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.977489    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:02.980961    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:03.009657    7944 logs.go:282] 0 containers: []
	W1217 00:43:03.009657    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:03.013587    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:03.042816    7944 logs.go:282] 0 containers: []
	W1217 00:43:03.042816    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:03.042816    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:03.042816    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:03.126456    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:03.115768   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.116665   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.118976   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.119737   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.121834   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:03.115768   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.116665   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.118976   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.119737   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.121834   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:03.126456    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:03.126456    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:03.167566    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:03.167566    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:03.219094    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:03.219094    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:03.285299    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:03.285299    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:05.820619    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:05.845854    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:05.875867    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.875867    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:05.879229    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:05.909558    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.909558    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:05.912556    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:05.942200    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.942273    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:05.945627    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:05.975289    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.975289    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:05.979052    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:06.009570    7944 logs.go:282] 0 containers: []
	W1217 00:43:06.009570    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:06.013210    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:06.042977    7944 logs.go:282] 0 containers: []
	W1217 00:43:06.042977    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:06.046640    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:06.075849    7944 logs.go:282] 0 containers: []
	W1217 00:43:06.075849    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:06.075849    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:06.075849    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:06.120266    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:06.120266    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:06.168821    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:06.168821    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:06.230879    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:06.230879    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:06.260885    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:06.260885    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:06.340031    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:06.330529   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.331395   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.334293   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.335557   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.336695   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:06.330529   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.331395   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.334293   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.335557   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.336695   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:08.845285    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:08.868682    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:08.897291    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.897291    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:08.900871    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:08.928001    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.928001    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:08.931488    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:08.961792    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.961792    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:08.965426    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:08.994180    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.994253    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:08.997983    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:09.026539    7944 logs.go:282] 0 containers: []
	W1217 00:43:09.026539    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:09.030228    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:09.061065    7944 logs.go:282] 0 containers: []
	W1217 00:43:09.061094    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:09.064483    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:09.093815    7944 logs.go:282] 0 containers: []
	W1217 00:43:09.093815    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:09.093815    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:09.093815    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:09.173989    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:09.162229   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.164006   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.164905   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.168015   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.169720   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:09.162229   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.164006   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.164905   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.168015   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.169720   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:09.174037    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:09.174037    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:09.214846    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:09.214846    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:09.269685    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:09.269685    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:09.331802    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:09.331802    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:11.869149    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:11.892656    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:11.921635    7944 logs.go:282] 0 containers: []
	W1217 00:43:11.921635    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:11.926449    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:11.957938    7944 logs.go:282] 0 containers: []
	W1217 00:43:11.957938    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:11.961505    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:11.991894    7944 logs.go:282] 0 containers: []
	W1217 00:43:11.991894    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:11.995992    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:12.025039    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.025039    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:12.029016    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:12.060459    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.060459    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:12.064652    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:12.096164    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.096164    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:12.100038    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:12.129762    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.129824    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:12.129824    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:12.129824    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:12.194950    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:12.194950    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:12.227435    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:12.227435    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:12.311750    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:12.301902   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.303071   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.304222   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.305986   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.307529   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
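	The failing "describe nodes" gatherer is reproducible in isolation: it invokes the version-pinned kubectl binary that minikube stages on the node against the in-node kubeconfig, and it keeps exiting with status 1 for as long as port 8441 refuses connections:

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	      --kubeconfig=/var/lib/minikube/kubeconfig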
	I1217 00:43:12.311750    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:12.311750    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:12.352387    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:12.352387    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
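	The container-status gatherer just above is a compact fallback chain: it resolves crictl (or keeps the bare name if `which` finds nothing), and if that invocation fails for any reason it falls back to plain docker. Unrolled, assuming the same shell environment on the node:

	    crictl_bin="$(which crictl || echo crictl)"    # absolute path, or the bare name as a last resort
	    sudo "$crictl_bin" ps -a || sudo docker ps -a  # docker is the fallback when crictl is absent or errors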
	I1217 00:43:14.907650    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:14.933011    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:14.961340    7944 logs.go:282] 0 containers: []
	W1217 00:43:14.961340    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:14.964869    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:14.991179    7944 logs.go:282] 0 containers: []
	W1217 00:43:14.991179    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:14.996502    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:15.025325    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.025325    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:15.031024    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:15.058452    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.058452    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:15.062691    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:15.091232    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.091232    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:15.096528    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:15.127551    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.127551    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:15.131605    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:15.161113    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.161113    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:15.161113    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:15.161113    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:15.189644    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:15.189644    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:15.270306    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:15.259821   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.260629   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.263303   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.264244   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.266788   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:15.270306    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:15.270306    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:15.311714    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:15.311714    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:15.371391    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:15.371391    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
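	Each probe cycle scans the same seven control-plane and CNI components, one docker ps query per name filter. The scan above condenses to a single loop over the k8s_ name prefixes used in this log:

	    # the per-component container scan from this log, written as a loop
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
	      docker ps -a --filter=name=k8s_$c --format={{.ID}}
	    done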
	I1217 00:43:17.939209    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:17.962095    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:17.990273    7944 logs.go:282] 0 containers: []
	W1217 00:43:17.990273    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:17.993918    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:18.025229    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.025229    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:18.029538    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:18.060092    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.060092    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:18.064444    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:18.095199    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.095230    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:18.098808    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:18.129658    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.129658    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:18.133236    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:18.163628    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.163628    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:18.167493    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:18.199253    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.199253    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:18.199253    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:18.199253    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:18.252203    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:18.252203    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:18.316097    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:18.316097    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:18.347393    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:18.347393    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:18.426495    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:18.416595   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.417796   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.419140   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.420105   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.421235   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:18.426495    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:18.426495    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:20.972950    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:20.998624    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:21.025837    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.025837    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:21.029315    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:21.061085    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.061085    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:21.065387    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:21.092871    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.092871    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:21.096706    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:21.126179    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.126179    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:21.129834    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:21.159720    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.159720    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:21.163263    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:21.193011    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.193011    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:21.196667    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:21.229222    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.229222    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:21.229222    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:21.229222    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:21.279391    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:21.279391    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:21.341649    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:21.341649    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:21.372055    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:21.372055    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:21.451011    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:21.440556   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.441861   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.442811   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.446984   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.448016   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:21.451011    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:21.451011    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:24.011538    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:24.037171    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:24.067520    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.067544    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:24.070755    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:24.101421    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.101454    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:24.104927    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:24.133336    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.133336    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:24.137178    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:24.164662    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.164662    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:24.168324    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:24.200218    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.200218    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:24.203764    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:24.234603    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.234603    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:24.238011    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:24.267400    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.267400    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:24.267400    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:24.267400    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:24.348263    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:24.338918   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.339739   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.341999   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.343378   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.344717   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:24.348263    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:24.348263    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:24.393298    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:24.393298    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:24.446709    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:24.446709    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:24.518891    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:24.518891    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
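	The timestamps show the whole probe cycle repeating on a roughly three-second cadence (00:43:09, :11, :14, :17, :20, :24, ...). A sketch of waiting for the apiserver the same way from inside the node, assuming curl is available there; the loop exits as soon as port 8441 stops refusing connections:

	    # poll the apiserver health endpoint until the port accepts connections
	    while ! curl -ksS https://localhost:8441/healthz >/dev/null; do sleep 3; done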
	I1217 00:43:27.054877    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:27.078747    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:27.111142    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.111142    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:27.114844    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:27.143801    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.143801    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:27.147663    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:27.176215    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.176215    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:27.179758    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:27.208587    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.208587    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:27.211873    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:27.241061    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.241061    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:27.244905    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:27.276011    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.276065    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:27.279281    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:27.309068    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.309068    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:27.309068    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:27.309068    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:27.372079    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:27.372079    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:27.403215    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:27.403215    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:27.502209    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:27.492924   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.494023   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.494999   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.496603   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.497726   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:27.502209    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:27.502209    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:27.543251    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:27.543251    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:30.103213    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:30.126929    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:30.158148    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.158148    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:30.162286    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:30.191927    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.191927    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:30.195748    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:30.225040    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.225040    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:30.229444    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:30.260498    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.260498    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:30.264750    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:30.293312    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.293312    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:30.296869    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:30.325167    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.325167    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:30.328938    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:30.363267    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.363267    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:30.363267    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:30.363267    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:30.393795    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:30.393795    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:30.487446    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:30.464124   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.465346   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.468428   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.469684   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.481402   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:30.487446    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:30.487446    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:30.530226    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:30.530226    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:30.585635    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:30.585635    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:33.151438    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:33.175766    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:33.207203    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.207203    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:33.210965    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:33.237795    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.237795    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:33.242087    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:33.273041    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.273041    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:33.277103    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:33.305283    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.305283    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:33.309730    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:33.337737    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.337737    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:33.341408    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:33.370694    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.370694    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:33.374111    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:33.407836    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.407836    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:33.407836    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:33.407836    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:33.434955    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:33.434955    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:33.529365    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:33.517320   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.518450   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.519517   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.520800   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.522107   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:33.529365    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:33.529365    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:33.572145    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:33.572145    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:33.624502    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:33.624502    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:36.189426    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:36.213378    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:36.243407    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.243407    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:36.246746    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:36.274995    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.274995    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:36.278271    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:36.305533    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.305533    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:36.309459    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:36.338892    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.338892    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:36.342669    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:36.373516    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.373516    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:36.377003    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:36.404831    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.404831    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:36.408515    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:36.437790    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.437790    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:36.437790    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:36.437790    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:36.540076    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:36.526050   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.528341   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.531176   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.532283   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.533415   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:36.540076    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:36.540076    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:36.580664    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:36.580664    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:36.635234    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:36.635234    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:36.695702    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:36.695702    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:39.230926    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:39.255012    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:39.288661    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.288661    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:39.293143    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:39.320903    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.320967    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:39.324725    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:39.350161    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.350161    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:39.353696    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:39.380073    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.380073    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:39.383515    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:39.411510    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.411510    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:39.415491    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:39.449683    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.449683    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:39.453620    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:39.487800    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.487800    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:39.487800    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:39.487800    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:39.552943    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:39.552943    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:39.582035    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:39.583033    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:39.660499    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:39.647312   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.648102   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.652665   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.654408   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.654966   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:39.647312   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.648102   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.652665   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.654408   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.654966   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
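	[editor's note] The block above records five identical client-side failures: every kubectl request dies dialing localhost:8441 before any API discovery happens, i.e. nothing is listening on the apiserver port at all. A minimal Go sketch of that reachability check (a hypothetical helper for illustration, not minikube's code; the host:port is taken from the kubeconfig used in these runs):

	// probe.go: reproduce the failing reachability check from the log above.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
		if err != nil {
			// Matches the log: "dial tcp [::1]:8441: connect: connection refused"
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port open")
	}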
	I1217 00:43:39.660499    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:39.660499    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:39.705645    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:39.705645    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:42.267731    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:42.297885    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:42.329299    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.329326    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:42.332959    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:42.361173    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.361173    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:42.365107    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:42.393236    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.393236    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:42.397363    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:42.430949    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.430949    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:42.435377    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:42.465696    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.465696    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:42.468849    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:42.512182    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.512182    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:42.515699    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:42.545680    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.545680    7944 logs.go:284] No container was found matching "kindnet"
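	[editor's note] Each cycle above probes for the seven expected control-plane containers via a Docker name filter (k8s_<component>) and finds none of them. A sketch of that probe pattern under the same assumptions (hypothetical code, not the actual logs.go implementation):

	// probe_containers.go: ask Docker for containers matching each
	// k8s_<component> name prefix and report the ones that are missing.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, c := range components {
			out, err := exec.Command("docker", "ps", "-a",
				"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
			if err != nil {
				fmt.Printf("probe %q failed: %v\n", c, err)
				continue
			}
			if ids := strings.Fields(string(out)); len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
			}
		}
	}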
	I1217 00:43:42.545680    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:42.545680    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:42.607372    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:42.607372    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:42.637761    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:42.637761    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:42.720140    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:42.709136   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.709905   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.711877   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.712984   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.713829   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:42.709136   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.709905   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.711877   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.712984   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.713829   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:42.720140    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:42.720140    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:42.760712    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:42.760712    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
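	[editor's note] The "container status" command above is a shell fallback chain: use crictl if it is on PATH, otherwise fall back to plain docker ps -a. Roughly the same logic in Go (a sketch only; minikube actually runs the bash one-liner over SSH):

	// container_status.go: prefer crictl for container status, fall back
	// to `docker ps -a` when crictl is absent or errors out.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func containerStatus() ([]byte, error) {
		if path, err := exec.LookPath("crictl"); err == nil {
			if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
				return out, nil
			}
		}
		return exec.Command("sudo", "docker", "ps", "-a").Output()
	}

	func main() {
		out, err := containerStatus()
		if err != nil {
			fmt.Println("container status failed:", err)
			return
		}
		fmt.Print(string(out))
	}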
	I1217 00:43:45.318861    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:45.345331    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:45.376136    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.376136    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:45.379539    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:45.408720    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.408720    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:45.412623    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:45.444664    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.444664    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:45.448226    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:45.484195    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.484195    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:45.488022    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:45.515242    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.515242    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:45.519184    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:45.551260    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.551260    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:45.554894    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:45.581795    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.581795    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:45.581795    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:45.581795    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:45.625880    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:45.625880    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:45.678280    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:45.678280    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:45.738938    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:45.738938    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:45.770054    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:45.770054    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:45.854057    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:45.839960   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.842045   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.843544   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.846571   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.847420   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:45.839960   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.842045   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.843544   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.846571   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.847420   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:48.359806    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:48.384092    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:48.415158    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.415192    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:48.418996    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:48.446149    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.446149    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:48.449676    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:48.487416    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.487416    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:48.491652    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:48.520073    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.520073    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:48.524101    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:48.550421    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.550421    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:48.554497    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:48.583643    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.583666    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:48.587154    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:48.616812    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.616812    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:48.616812    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:48.616812    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:48.681323    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:48.681323    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:48.712866    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:48.712866    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:48.798447    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:48.788338   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.789333   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.790575   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.791655   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.792589   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:48.788338   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.789333   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.790575   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.791655   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.792589   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:48.798447    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:48.798447    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:48.839546    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:48.839546    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:51.393802    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:51.419527    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:51.453783    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.453783    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:51.457619    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:51.496053    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.496053    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:51.499949    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:51.528492    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.528492    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:51.531946    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:51.560363    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.560363    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:51.563875    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:51.597143    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.597143    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:51.600764    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:51.630459    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.630459    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:51.634473    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:51.667072    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.667072    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:51.667072    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:51.667072    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:51.719154    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:51.719154    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:51.779761    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:51.779761    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:51.810036    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:51.810036    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:51.887952    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:51.877388   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.878091   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.881129   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.882321   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.883227   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:51.877388   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.878091   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.881129   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.882321   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.883227   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:51.887952    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:51.887952    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:54.434243    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:54.457541    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:54.486698    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.486698    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:54.491137    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:54.520500    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.520500    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:54.524176    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:54.552487    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.552487    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:54.556310    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:54.585424    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.585424    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:54.588683    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:54.619901    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.619970    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:54.623608    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:54.655623    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.655706    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:54.658833    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:54.690413    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.690413    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:54.690413    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:54.690492    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:54.771466    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:54.760114   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.761075   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.762159   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.763541   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.764770   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:54.760114   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.761075   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.762159   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.763541   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.764770   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:54.771466    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:54.771466    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:54.813307    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:54.813307    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:54.874633    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:54.875154    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:54.937630    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:54.937630    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:57.472782    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:57.497186    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:57.526677    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.526745    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:57.530218    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:57.557916    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.557948    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:57.562041    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:57.590924    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.590924    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:57.594569    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:57.621738    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.621738    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:57.627319    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:57.656111    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.656111    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:57.659689    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:57.690217    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.690217    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:57.693915    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:57.723629    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.723629    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:57.723629    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:57.723688    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:57.788129    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:57.788129    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:57.818809    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:57.818809    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:57.903055    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:57.891485   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.892810   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.893729   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.896044   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.896988   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:57.891485   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.892810   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.893729   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.896044   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.896988   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:57.903055    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:57.903055    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:57.944153    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:57.944153    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:00.501950    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:00.530348    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:00.561749    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.562270    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:00.566179    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:00.596812    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.596812    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:00.600551    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:00.628898    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.628898    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:00.632187    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:00.661210    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.661255    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:00.664477    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:00.692625    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.692625    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:00.696565    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:00.727420    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.727420    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:00.731176    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:00.761041    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.761041    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:00.761041    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:00.761041    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:00.813195    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:00.813286    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:00.875819    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:00.875819    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:00.906004    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:00.906004    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:00.995354    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:00.985498   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.986676   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.987771   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.989033   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.990260   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:00.985498   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.986676   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.987771   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.989033   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.990260   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:00.995354    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:00.995354    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:03.542659    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:03.566401    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:03.597875    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.597875    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:03.602087    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:03.631114    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.631114    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:03.635275    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:03.664437    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.665863    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:03.669211    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:03.697100    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.697100    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:03.701535    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:03.731200    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.731200    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:03.735391    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:03.764893    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.764893    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:03.768303    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:03.799245    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.799245    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:03.799245    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:03.799245    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:03.863068    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:03.863068    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:03.892825    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:03.892825    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:03.975253    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:03.964400   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.965730   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.967384   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.969805   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.970929   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:03.964400   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.965730   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.967384   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.969805   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.970929   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:03.975253    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:03.975253    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:04.016164    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:04.016164    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:06.571695    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:06.597029    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:06.627889    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.627889    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:06.631611    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:06.661118    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.661118    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:06.664736    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:06.694336    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.694336    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:06.698523    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:06.728693    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.728693    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:06.732767    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:06.762060    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.762130    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:06.765313    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:06.795222    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.795222    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:06.799233    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:06.829491    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.829525    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:06.829525    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:06.829558    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:06.858476    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:06.858476    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:06.938014    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:06.927171   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.928103   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.929321   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.932292   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.933974   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:06.927171   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.928103   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.929321   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.932292   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.933974   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:06.938014    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:06.938014    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:06.978960    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:06.978960    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:07.027942    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:07.027942    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:09.595591    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:09.619202    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:09.648727    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.648727    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:09.653265    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:09.684682    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.684682    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:09.688140    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:09.715249    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.715249    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:09.718566    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:09.749969    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.749969    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:09.753003    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:09.779832    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.779832    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:09.783608    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:09.812286    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.812326    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:09.816849    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:09.845801    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.845801    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:09.845801    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:09.845801    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:09.890276    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:09.891278    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:09.945030    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:09.945030    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:10.007215    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:10.007215    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:10.037318    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:10.037318    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:10.122162    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:10.111724   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.112922   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.114124   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.115187   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.116442   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:10.111724   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.112922   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.114124   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.115187   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.116442   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:12.627660    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:12.651516    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:12.684952    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.684952    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:12.688749    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:12.717327    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.717327    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:12.721146    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:12.749548    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.749548    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:12.752616    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:12.784015    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.784015    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:12.787596    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:12.817388    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.817388    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:12.821554    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:12.849737    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.849737    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:12.853589    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:12.882735    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.882735    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:12.882735    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:12.882735    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:12.966389    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:12.956160   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.957149   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.957910   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.960356   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.961793   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:12.956160   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.957149   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.957910   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.960356   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.961793   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:12.966389    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:12.966389    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:13.009759    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:13.009759    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:13.057767    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:13.057767    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:13.121685    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:13.121685    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
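
The block ending above is one complete diagnostic pass, and the log repeats it for the rest of the start attempt: minikube probes for a kube-apiserver process, finds none of the expected control-plane containers, and re-collects the same five log sources. Every command appears verbatim in the log; condensed into a sketch, one pass run inside the node looks like this (paths and the kubectl version are copied from the lines above, not assumed):

    # Condensed sketch of one diagnostic pass from the log (not minikube source).
    # 1. Probe for a running apiserver process; failure triggers another pass.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # 2. Look for each expected control-plane container; every check returns empty.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      docker ps -a --filter=name=k8s_"$c" --format='{{.ID}}'
    done
    # 3. Re-collect the same five log sources on every failed pass.
    sudo journalctl -u kubelet -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig    # fails while :8441 is closed
    sudo journalctl -u docker -u cri-docker -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
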
	I1217 00:44:15.659014    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:15.683463    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:15.714834    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.714857    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:15.718351    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:15.749782    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.749812    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:15.753368    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:15.782321    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.782321    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:15.785961    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:15.816416    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.816416    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:15.822152    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:15.848733    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.848791    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:15.852246    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:15.881272    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.881310    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:15.886378    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:15.917818    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.917818    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:15.917892    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:15.917892    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:15.983033    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:15.983033    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:16.015133    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:16.015133    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:16.105395    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:16.093215   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.094155   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.098670   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.100261   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.100776   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:16.093215   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.094155   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.098670   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.100261   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.100776   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:16.105395    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:16.105438    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:16.146209    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:16.146209    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:18.701433    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:18.725475    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:18.759149    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.759149    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:18.762892    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:18.795437    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.795437    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:18.799127    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:18.835050    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.835580    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:18.839967    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:18.867222    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.867222    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:18.870583    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:18.899263    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.899263    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:18.902802    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:18.934115    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.934115    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:18.937420    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:18.969205    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.969205    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:18.969205    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:18.969205    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:19.030841    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:19.030841    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:19.061419    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:19.061938    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:19.143852    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:19.132860   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.133712   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.136777   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.137881   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.138767   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:19.132860   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.133712   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.136777   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.137881   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.138767   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:19.143852    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:19.143852    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:19.187635    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:19.187709    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:21.747174    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:21.771176    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:21.800995    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.800995    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:21.804142    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:21.836064    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.836131    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:21.839865    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:21.868223    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.868292    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:21.871954    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:21.900714    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.900714    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:21.904281    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:21.931611    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.931611    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:21.935666    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:21.963188    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.963188    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:21.967538    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:21.994527    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.994527    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:21.994527    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:21.994527    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:22.061635    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:22.061635    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:22.093213    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:22.093213    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:22.179644    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:22.168849   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.170300   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.172127   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.174562   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.176641   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:22.168849   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.170300   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.172127   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.174562   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.176641   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:22.179644    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:22.179644    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:22.223092    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:22.223092    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:24.783065    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:24.806396    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:24.838512    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.838512    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:24.842023    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:24.871052    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.871052    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:24.874639    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:24.903466    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.903466    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:24.906973    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:24.938000    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.938000    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:24.942149    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:24.970337    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.970371    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:24.973308    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:25.003460    7944 logs.go:282] 0 containers: []
	W1217 00:44:25.003460    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:25.007008    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:25.035638    7944 logs.go:282] 0 containers: []
	W1217 00:44:25.035638    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:25.035638    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:25.035638    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:25.097833    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:25.097833    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:25.128758    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:25.128758    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:25.209843    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:25.201498   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.202808   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.204759   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.205808   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.207251   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:25.201498   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.202808   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.204759   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.205808   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.207251   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:25.209843    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:25.209843    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:25.250600    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:25.250600    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:27.806610    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:27.831257    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:27.864142    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.864142    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:27.867995    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:27.897561    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.897561    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:27.900925    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:27.931079    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.931079    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:27.934151    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:27.964321    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.964321    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:27.969534    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:27.999709    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.999709    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:28.002966    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:28.034961    7944 logs.go:282] 0 containers: []
	W1217 00:44:28.035008    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:28.038649    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:28.067733    7944 logs.go:282] 0 containers: []
	W1217 00:44:28.067733    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:28.067733    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:28.067733    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:28.150573    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:28.140463   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.141608   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.143366   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.146165   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.147662   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:28.140463   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.141608   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.143366   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.146165   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.147662   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:28.150573    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:28.150573    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:28.192203    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:28.192203    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:28.248534    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:28.248624    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:28.306585    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:28.306585    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
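
Note the timestamps on the pgrep probes (00:44:12.6, 00:44:15.7, 00:44:18.7, ...): one probe roughly every three seconds. A hypothetical sketch of that retry shape follows; the spacing is read off the log, but the loop structure and the overall timeout are assumptions for illustration, not minikube's actual code in logs.go/ssh_runner.go:

    # Hypothetical retry shape inferred from the ~3 s spacing of the pgrep probes.
    deadline=$((SECONDS + 120))    # assumed overall timeout, for illustration only
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      (( SECONDS >= deadline )) && { echo 'timed out waiting for apiserver'; exit 1; }
      # ...gather the diagnostics shown in the passes above on each failure...
      sleep 3
    done
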
	I1217 00:44:30.842138    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:30.867340    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:30.899142    7944 logs.go:282] 0 containers: []
	W1217 00:44:30.899142    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:30.903037    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:30.932057    7944 logs.go:282] 0 containers: []
	W1217 00:44:30.932057    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:30.938184    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:30.965554    7944 logs.go:282] 0 containers: []
	W1217 00:44:30.965554    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:30.969154    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:30.997999    7944 logs.go:282] 0 containers: []
	W1217 00:44:30.997999    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:31.001861    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:31.031079    7944 logs.go:282] 0 containers: []
	W1217 00:44:31.031142    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:31.034735    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:31.063582    7944 logs.go:282] 0 containers: []
	W1217 00:44:31.063582    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:31.069235    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:31.098869    7944 logs.go:282] 0 containers: []
	W1217 00:44:31.098948    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:31.098948    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:31.098948    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:31.127253    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:31.127253    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:31.211541    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:31.202334   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.203549   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.205527   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.206517   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.207872   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:31.202334   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.203549   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.205527   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.206517   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.207872   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:31.211541    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:31.211541    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:31.258478    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:31.258478    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:31.308932    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:31.308932    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:33.876600    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:33.899781    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:33.930969    7944 logs.go:282] 0 containers: []
	W1217 00:44:33.930969    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:33.934621    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:33.964938    7944 logs.go:282] 0 containers: []
	W1217 00:44:33.964938    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:33.968775    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:33.998741    7944 logs.go:282] 0 containers: []
	W1217 00:44:33.998793    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:34.002265    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:34.030279    7944 logs.go:282] 0 containers: []
	W1217 00:44:34.030279    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:34.034177    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:34.063244    7944 logs.go:282] 0 containers: []
	W1217 00:44:34.063244    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:34.066512    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:34.095842    7944 logs.go:282] 0 containers: []
	W1217 00:44:34.095842    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:34.099843    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:34.133173    7944 logs.go:282] 0 containers: []
	W1217 00:44:34.133173    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:34.133173    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:34.133173    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:34.198297    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:34.198297    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:34.229134    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:34.229134    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:34.305327    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:34.295599   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.296405   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.298959   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.301044   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.302073   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:34.295599   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.296405   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.298959   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.301044   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.302073   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:34.305327    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:34.305327    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:34.346912    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:34.346912    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:36.903423    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:36.929005    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:36.959255    7944 logs.go:282] 0 containers: []
	W1217 00:44:36.959255    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:36.962841    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:36.991016    7944 logs.go:282] 0 containers: []
	W1217 00:44:36.991016    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:36.995294    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:37.027615    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.027615    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:37.031225    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:37.063793    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.063793    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:37.067539    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:37.098257    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.098257    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:37.104945    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:37.135094    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.135094    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:37.139494    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:37.170825    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.170825    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:37.170825    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:37.170825    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:37.236025    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:37.236025    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:37.266143    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:37.266143    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:37.356401    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:37.344016   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.345140   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.346045   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.350812   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.351984   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:37.344016   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.345140   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.346045   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.350812   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.351984   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:37.356401    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:37.356401    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:37.397010    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:37.397010    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:39.951831    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:39.975669    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:40.007629    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.007629    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:40.011435    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:40.041534    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.041534    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:40.045543    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:40.072927    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.072927    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:40.076835    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:40.104604    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.104604    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:40.108678    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:40.136644    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.136644    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:40.140732    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:40.172579    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.172579    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:40.176191    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:40.207078    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.207078    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:40.207078    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:40.207171    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:40.271921    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:40.271921    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:40.302650    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:40.302650    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:40.384552    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:40.373909   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.375248   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.376424   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.377960   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.378727   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:40.384552    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:40.384552    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:40.425377    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:40.425377    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
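The block above is one complete diagnostic pass: minikube first looks for a live kube-apiserver process, then asks Docker, component by component, whether any control-plane container was ever created, and finally fans out to collect kubelet, dmesg, describe-nodes, Docker, and container-status logs. A minimal sketch of the per-component probe, assuming it runs inside the node where the Docker CLI is available (the loop itself is an illustrative reconstruction, not minikube's actual code; the component names and the k8s_ name prefix are taken from the log lines above):

#!/usr/bin/env bash
# Probe each control-plane component for any container, running or exited.
components=(kube-apiserver etcd coredns kube-scheduler kube-proxy
            kube-controller-manager kindnet)
for c in "${components[@]}"; do
  # cri-dockerd names pod containers k8s_<component>_<pod>_..., so a name
  # filter on k8s_<component> matches them; -a includes exited containers.
  ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
  if [ -z "$ids" ]; then
    echo "No container was found matching \"${c}\""
  fi
done

Every probe in this run comes back empty, which is why each pass ends with the same seven warnings.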
	I1217 00:44:42.980281    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:43.003860    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:43.036168    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.036168    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:43.040136    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:43.068891    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.068891    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:43.072976    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:43.103823    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.103823    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:43.107774    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:43.134339    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.134339    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:43.137929    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:43.168166    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.168166    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:43.172279    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:43.200333    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.200333    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:43.204183    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:43.236225    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.236225    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:43.236225    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:43.236225    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:43.280577    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:43.280577    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:43.331604    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:43.331604    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:43.392357    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:43.392357    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:43.423125    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:43.423125    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:43.508115    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:43.496794   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.498087   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.499982   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.501972   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.502846   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
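Every describe-nodes attempt in these passes dies the same way: kubectl dials the apiserver at localhost:8441, the port this profile's kubeconfig points at, and the connection is refused, which means nothing is listening on that port at all rather than the server rejecting the request. Two quick checks that would confirm this from inside the node (a sketch; the command choice is my assumption, the port number comes from the errors above, and /livez is the standard unauthenticated health endpoint on recent apiservers):

# Is anything bound to the apiserver port?
sudo ss -tlnp | grep 8441 || echo "nothing listening on :8441"
# Health probe: -s silences progress output, -k skips TLS verification.
curl -sk https://localhost:8441/livez || echo "apiserver unreachable"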
	I1217 00:44:46.013886    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:46.042290    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:46.074707    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.074707    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:46.078216    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:46.109309    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.109309    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:46.112661    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:46.141002    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.141002    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:46.144585    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:46.172550    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.172550    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:46.178681    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:46.209054    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.209054    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:46.212761    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:46.242212    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.242212    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:46.245894    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:46.273677    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.273677    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:46.273719    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:46.273719    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:46.339840    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:46.339840    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:46.373287    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:46.373287    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:46.452686    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:46.442520   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.443589   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.446075   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.448524   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.449556   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:46.452686    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:46.452686    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:46.498608    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:46.498608    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
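The log-gathering half of each pass reuses the same journalctl and dmesg recipe every round; annotated for readability (identical commands to the log, with only comments added):

# Last 400 lines from the docker and cri-docker units, merged into one stream.
sudo journalctl -u docker -u cri-docker -n 400
# Kernel ring buffer: human-readable (-H), no pager (-P), no color (-L=never),
# warnings and worse only, trimmed to the final 400 lines.
sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400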
	I1217 00:44:49.050761    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:49.075428    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:49.105673    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.105673    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:49.109924    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:49.140245    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.140245    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:49.143980    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:49.175115    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.175115    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:49.181267    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:49.213667    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.213667    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:49.217486    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:49.249277    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.249277    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:49.252880    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:49.279244    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.279287    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:49.282893    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:49.313826    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.313826    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:49.313826    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:49.313826    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:49.395270    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:49.385168   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.385960   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.388757   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.390178   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.391697   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:49.395270    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:49.395270    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:49.439990    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:49.439990    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:49.493048    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:49.493048    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:49.555675    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:49.555675    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
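The timestamps show the whole pass repeating on a roughly three-second cadence (00:44:40, 00:44:43, 00:44:46, 00:44:49, ...), each round opening with the same pgrep check for an apiserver process (-x exact match, -n newest, -f match against the full command line). A minimal sketch of that wait loop, assuming a fixed 3 s interval and an arbitrary 5-minute budget (both values are assumptions; minikube's real timeout is not visible in this excerpt):

# Poll until a kube-apiserver process for this profile appears, or give up.
deadline=$((SECONDS + 300))   # assumed 5-minute budget
until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
  if [ "$SECONDS" -ge "$deadline" ]; then
    echo "kube-apiserver never started" >&2
    exit 1
  fi
  sleep 3                     # matches the ~3 s spacing in the log
done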
	I1217 00:44:52.091191    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:52.121154    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:52.152807    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.152807    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:52.157047    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:52.185793    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.185793    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:52.188792    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:52.217804    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.218793    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:52.221792    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:52.253749    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.253749    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:52.257528    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:52.286783    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.286783    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:52.290341    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:52.319799    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.319799    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:52.323376    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:52.351656    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.351656    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:52.351656    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:52.351656    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:52.395381    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:52.395381    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:52.449049    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:52.449049    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:52.511942    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:52.511942    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:52.541707    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:52.541707    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:52.622537    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:52.614766   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.615704   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.616948   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.617983   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.618983   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:55.130052    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:55.154497    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:55.185053    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.185086    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:55.188968    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:55.215935    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.215935    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:55.220385    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:55.249124    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.249159    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:55.253058    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:55.282148    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.282230    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:55.285701    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:55.315081    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.315081    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:55.320240    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:55.350419    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.350449    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:55.353993    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:55.386346    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.386346    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:55.386346    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:55.386346    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:55.463518    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:55.456649   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.457723   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.458695   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.460286   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.461389   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:55.463518    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:55.463518    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:55.502884    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:55.502884    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:55.567300    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:55.567300    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:55.630547    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:55.630547    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
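The recurring container-status command packs a small fallback chain into a single line; annotated (same command as in the log, comments only):

# `which crictl || echo crictl` resolves crictl's absolute path when it is
# installed, or leaves the bare name so sudo can still try it via PATH;
# if crictl fails either way, fall back to the plain Docker CLI.
sudo `which crictl || echo crictl` ps -a || sudo docker ps -a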
	I1217 00:44:58.165717    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:58.189522    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:58.223415    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.223415    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:58.227138    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:58.256133    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.256133    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:58.259919    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:58.289751    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.289751    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:58.293341    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:58.323835    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.323835    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:58.327981    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:58.358897    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.358897    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:58.362525    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:58.393696    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.393696    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:58.397786    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:58.426810    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.426810    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:58.426810    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:58.426810    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:58.492668    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:58.492668    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:58.523854    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:58.523854    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:58.609164    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:58.598901   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.599812   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.602076   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.604272   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.606217   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:58.609164    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:58.609164    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:58.654356    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:58.654356    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:01.211859    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:01.236949    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:01.268645    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.268645    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:01.273856    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:01.305336    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.305336    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:01.309133    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:01.339056    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.339056    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:01.343432    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:01.373802    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.373802    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:01.378587    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:01.408624    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.408624    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:01.414210    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:01.446499    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.446499    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:01.450189    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:01.479782    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.479782    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:01.479782    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:01.479829    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:01.526819    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:01.526819    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:01.591797    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:01.591797    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:01.624206    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:01.624206    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:01.713187    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:01.701188   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.703402   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.704627   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.705600   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.706926   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:01.713187    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:01.713187    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:04.261443    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:04.286201    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:04.315610    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.315610    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:04.319607    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:04.348007    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.348007    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:04.351825    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:04.378854    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.378854    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:04.382430    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:04.414385    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.414385    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:04.419751    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:04.447734    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.447734    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:04.452650    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:04.483414    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.483414    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:04.488519    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:04.520173    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.520173    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:04.520173    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:04.520173    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:04.583573    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:04.583573    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:04.615102    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:04.615102    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:04.703186    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:04.693374   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.694566   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.695324   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.698221   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.699360   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:04.703186    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:04.703186    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:04.745696    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:04.745696    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:07.302305    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:07.327138    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:07.357072    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.357072    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:07.361245    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:07.393135    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.393135    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:07.397020    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:07.426598    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.426623    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:07.430259    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:07.459216    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.459216    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:07.463233    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:07.491206    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.491206    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:07.496432    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:07.527082    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.527082    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:07.530080    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:07.563609    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.563609    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:07.563609    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:07.563609    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:07.624175    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:07.624175    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:07.654046    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:07.655373    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:07.733760    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:07.724686   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.725828   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.726798   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.727878   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.729852   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:07.733760    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:07.733760    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:07.775826    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:07.775826    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:10.333009    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:10.359433    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:10.394281    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.394281    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:10.399772    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:10.431921    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.431921    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:10.435941    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:10.466929    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.466929    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:10.469952    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:10.500979    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.500979    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:10.504132    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:10.532972    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.532972    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:10.536526    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:10.565609    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.565609    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:10.569307    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:10.597263    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.597263    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:10.597263    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:10.597263    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:10.625496    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:10.625496    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:10.716452    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:10.706137   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.707571   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.709046   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.710674   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.711932   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:10.706137   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.707571   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.709046   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.710674   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.711932   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:10.716452    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:10.716535    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:10.757898    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:10.757898    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:10.807685    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:10.807685    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:13.376757    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:13.401022    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:13.433179    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.433179    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:13.438943    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:13.466315    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.466315    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:13.469406    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:13.498170    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.498170    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:13.503463    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:13.531045    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.531045    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:13.534623    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:13.563549    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.563572    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:13.567173    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:13.595412    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.595412    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:13.599138    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:13.627347    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.627347    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:13.627347    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:13.627347    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:13.687440    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:13.688440    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:13.718641    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:13.718785    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:13.801949    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:13.792952   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.794106   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.795272   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.796913   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.798020   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:13.792952   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.794106   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.795272   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.796913   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.798020   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:13.801949    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:13.801949    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:13.846773    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:13.847288    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:16.401019    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:16.426837    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:16.461985    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.461985    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:16.465693    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:16.494330    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.494354    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:16.497490    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:16.527742    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.527742    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:16.531287    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:16.561095    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.561095    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:16.564902    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:16.594173    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.594173    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:16.597642    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:16.627598    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.627598    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:16.630884    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:16.659950    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.660031    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:16.660031    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:16.660031    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:16.740660    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:16.730888   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.732344   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.734426   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.736250   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.737220   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:16.730888   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.732344   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.734426   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.736250   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.737220   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:16.740692    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:16.740692    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:16.782319    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:16.782319    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:16.835245    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:16.835245    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:16.900147    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:16.900147    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:19.437638    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:19.462468    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:19.493244    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.493244    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:19.497367    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:19.526430    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.526430    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:19.530589    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:19.559166    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.559222    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:19.562429    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:19.594311    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.594311    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:19.597936    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:19.627339    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.627339    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:19.632033    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:19.659648    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.659648    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:19.663351    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:19.696628    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.696628    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:19.696628    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:19.696628    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:19.749701    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:19.749701    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:19.809018    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:19.809018    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:19.838771    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:19.838771    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:19.921290    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:19.910944   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.912216   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.913176   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.916258   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.918467   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:19.910944   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.912216   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.913176   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.916258   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.918467   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:19.921290    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:19.921290    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:22.468833    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:22.494625    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:22.526034    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.526034    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:22.529623    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:22.565289    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.565289    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:22.569286    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:22.597280    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.597280    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:22.601010    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:22.630330    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.630330    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:22.634511    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:22.663939    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.663939    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:22.667575    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:22.696762    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.696792    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:22.700137    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:22.732285    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.732285    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:22.732285    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:22.732285    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:22.814702    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:22.805990   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.808311   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.809673   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.810947   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.811986   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:22.805990   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.808311   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.809673   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.810947   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.811986   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:22.814702    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:22.814702    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:22.864515    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:22.864515    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:22.917896    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:22.917896    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:22.984213    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:22.984213    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:25.517090    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:25.542531    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:25.575294    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.575294    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:25.579526    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:25.610041    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.610041    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:25.614160    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:25.643682    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.643712    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:25.647264    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:25.679557    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.679557    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:25.685184    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:25.712791    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.712791    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:25.716775    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:25.747803    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.747803    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:25.751621    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:25.782130    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.782130    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:25.782130    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:25.782130    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:25.833735    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:25.833735    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:25.894476    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:25.894476    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:25.925218    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:25.925218    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:26.009195    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:26.000055   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.001227   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.002238   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.003136   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.005907   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:26.000055   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.001227   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.002238   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.003136   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.005907   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:26.009195    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:26.009195    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:28.558504    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:28.581900    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:28.615041    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.615041    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:28.619020    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:28.647386    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.647386    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:28.651512    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:28.679029    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.679029    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:28.682977    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:28.714035    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.714035    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:28.717407    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:28.746896    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.746920    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:28.749895    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:28.782541    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.782574    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:28.786249    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:28.813250    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.813250    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:28.813250    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:28.813250    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:28.891492    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:28.880764   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.881769   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.882976   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.883809   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.886227   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:28.880764   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.881769   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.882976   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.883809   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.886227   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:28.891492    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:28.891492    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:28.934039    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:28.934039    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:28.986066    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:28.986066    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:29.044402    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:29.045400    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:31.579014    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:31.605723    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:31.639437    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.639437    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:31.643001    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:31.672858    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.672858    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:31.676418    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:31.706815    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.706815    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:31.711450    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:31.739165    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.739165    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:31.742794    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:31.774213    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.774213    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:31.778092    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:31.808021    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.808021    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:31.811911    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:31.841111    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.841174    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:31.841207    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:31.841207    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:31.903600    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:31.903600    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:31.934979    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:31.934979    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:32.016581    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:32.006571   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.007538   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.008919   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.010207   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.011489   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:32.006571   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.007538   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.008919   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.010207   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.011489   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:32.016581    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:32.016581    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:32.059137    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:32.059137    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:34.619048    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:34.642906    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:34.676541    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.676541    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:34.680839    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:34.710245    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.710245    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:34.715809    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:34.754209    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.754227    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:34.757792    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:34.787283    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.787283    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:34.790335    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:34.823758    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.823758    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:34.827394    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:34.856153    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.856153    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:34.859978    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:34.890024    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.890024    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:34.890024    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:34.890024    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:34.954222    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:34.954222    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:34.985196    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:34.985196    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:35.067666    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:35.054527   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.055553   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.056467   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.060229   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.061212   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:35.054527   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.055553   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.056467   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.060229   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.061212   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:35.067666    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:35.067666    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:35.109711    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:35.109711    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:37.664972    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:37.687969    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:37.717956    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.717956    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:37.721553    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:37.750935    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.750935    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:37.755377    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:37.786480    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.786480    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:37.790806    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:37.821246    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.821246    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:37.825408    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:37.854559    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.854559    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:37.858605    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:37.888189    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.888189    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:37.892436    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:37.923454    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.923454    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:37.923454    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:37.923454    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:37.990022    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:37.990022    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:38.021197    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:38.021197    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:38.107061    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:38.096713   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.097911   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.098862   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.100144   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.101044   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:38.096713   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.097911   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.098862   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.100144   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.101044   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:38.107061    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:38.107061    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:38.150052    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:38.150052    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:40.710598    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:40.738050    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:40.769637    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.769637    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:40.773468    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:40.810478    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.810478    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:40.814079    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:40.848071    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.848071    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:40.851868    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:40.880725    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.880725    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:40.884928    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:40.915221    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.915221    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:40.919101    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:40.951097    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.951097    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:40.955307    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:40.990856    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.990901    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:40.990901    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:40.990901    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:41.041987    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:41.042028    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:41.104560    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:41.104560    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:41.134782    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:41.134782    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:41.221096    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:41.210697   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.211646   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.214339   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.215988   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.217121   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:41.210697   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.211646   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.214339   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.215988   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.217121   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:41.221096    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:41.221096    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:43.768841    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:43.807393    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:43.840153    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.840153    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:43.843740    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:43.873589    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.873589    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:43.877086    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:43.906593    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.906593    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:43.910563    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:43.940004    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.940004    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:43.944461    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:43.984818    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.984818    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:43.988580    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:44.016481    7944 logs.go:282] 0 containers: []
	W1217 00:45:44.016481    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:44.020610    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:44.050198    7944 logs.go:282] 0 containers: []
	W1217 00:45:44.050225    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:44.050225    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:44.050225    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:44.096362    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:44.096362    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:44.150219    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:44.150219    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:44.209135    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:44.209135    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:44.240518    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:44.240518    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:44.328383    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:44.316790   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.317749   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.322292   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.323067   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.324563   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:44.316790   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.317749   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.322292   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.323067   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.324563   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
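(Note: each cycle above is the same probe pass repeated every few seconds: minikube runs pgrep for kube-apiserver, asks Docker for each expected control-plane container by its k8s_ name prefix, finds none, then gathers kubelet/dmesg/describe-nodes/Docker/crictl logs and retries. A condensed Go sketch of the container probe — simplified by assumption to run docker directly rather than through minikube's ssh_runner:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs returns IDs of containers whose name matches k8s_<component>,
    // mirroring the `docker ps -a --filter=name=k8s_... --format={{.ID}}` calls above.
    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet"}
        for _, c := range components {
            ids, err := containerIDs(c)
            if err != nil {
                fmt.Printf("probe failed for %q: %v\n", c, err)
                continue
            }
            if len(ids) == 0 {
                fmt.Printf("no container was found matching %q\n", c)
                continue
            }
            fmt.Printf("%s: %v\n", c, ids)
        }
    }
)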
	I1217 00:45:46.833977    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:46.856919    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:46.889480    7944 logs.go:282] 0 containers: []
	W1217 00:45:46.889480    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:46.893215    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:46.924373    7944 logs.go:282] 0 containers: []
	W1217 00:45:46.924373    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:46.928774    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:46.961004    7944 logs.go:282] 0 containers: []
	W1217 00:45:46.961004    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:46.964726    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:47.003673    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.003673    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:47.006719    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:47.040232    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.040232    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:47.044112    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:47.074796    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.074796    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:47.078313    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:47.109819    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.109819    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:47.109819    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:47.109819    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:47.173702    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:47.174703    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:47.204290    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:47.204290    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:47.290268    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:47.281079   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.282388   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.283451   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.284976   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.285968   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:47.281079   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.282388   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.283451   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.284976   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.285968   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:47.290268    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:47.290268    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:47.332308    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:47.332308    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:49.890367    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:49.913613    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:49.943685    7944 logs.go:282] 0 containers: []
	W1217 00:45:49.943685    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:49.947685    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:49.975458    7944 logs.go:282] 0 containers: []
	W1217 00:45:49.975458    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:49.979401    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:50.010709    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.010709    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:50.014179    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:50.046146    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.046146    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:50.050033    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:50.082525    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.082525    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:50.085833    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:50.113901    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.113943    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:50.117783    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:50.148202    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.148290    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:50.148290    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:50.148290    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:50.208056    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:50.208056    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:50.239113    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:50.239113    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:50.326281    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:50.316567   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.317935   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.319862   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.321021   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.322100   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:50.316567   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.317935   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.319862   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.321021   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.322100   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:50.326281    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:50.326281    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:50.369080    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:50.369080    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:52.932111    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:52.956351    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:52.989854    7944 logs.go:282] 0 containers: []
	W1217 00:45:52.989854    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:52.995118    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:53.022557    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.022557    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:53.027906    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:53.062035    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.062035    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:53.065640    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:53.096245    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.096245    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:53.100861    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:53.131945    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.131945    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:53.135650    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:53.164825    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.164825    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:53.168602    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:53.198961    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.198961    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:53.198961    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:53.198961    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:53.260266    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:53.260266    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:53.290682    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:53.290682    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:53.375669    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:53.366817   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.367661   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.370028   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.371310   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.372461   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:53.366817   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.367661   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.370028   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.371310   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.372461   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:53.375669    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:53.375669    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:53.416110    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:53.416110    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:55.971979    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:55.991052    7944 kubeadm.go:602] duration metric: took 4m3.9896216s to restartPrimaryControlPlane
	W1217 00:45:55.991052    7944 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 00:45:55.996485    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 00:45:56.479923    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:45:56.502762    7944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:45:56.518662    7944 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:45:56.523597    7944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:45:56.536371    7944 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:45:56.536371    7944 kubeadm.go:158] found existing configuration files:
	
	I1217 00:45:56.541198    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:45:56.554668    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:45:56.559154    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:45:56.576197    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:45:56.590283    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:45:56.594634    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:45:56.612520    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:45:56.626118    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:45:56.631259    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:45:56.648494    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:45:56.661811    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:45:56.665826    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
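(Note: the grep/rm pairs above are minikube's stale-kubeconfig check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint; anything missing or pointing elsewhere is deleted so the following kubeadm init can regenerate it. A minimal local sketch of the same logic — an assumption-laden simplification using direct file access instead of ssh_runner, with the endpoint copied from the log:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8441"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err == nil && strings.Contains(string(data), endpoint) {
                fmt.Println("kept:", f) // already points at the right endpoint
                continue
            }
            os.Remove(f) // absent or stale: kubeadm init will rewrite it
            fmt.Println("removed or absent:", f)
        }
    }
)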
	I1217 00:45:56.684539    7944 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:45:56.809159    7944 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 00:45:56.895277    7944 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:45:56.990840    7944 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:49:57.581295    7944 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 00:49:57.581442    7944 kubeadm.go:319] 
	I1217 00:49:57.581498    7944 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 00:49:57.586513    7944 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:49:57.586513    7944 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:49:57.587141    7944 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:49:57.587141    7944 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 00:49:57.587141    7944 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 00:49:57.587141    7944 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 00:49:57.587666    7944 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_INET: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 00:49:57.588407    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 00:49:57.589479    7944 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 00:49:57.589618    7944 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 00:49:57.589771    7944 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 00:49:57.589895    7944 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 00:49:57.589957    7944 kubeadm.go:319] OS: Linux
	I1217 00:49:57.590117    7944 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:49:57.590205    7944 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:49:57.590849    7944 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:49:57.591066    7944 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:49:57.591250    7944 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:49:57.591469    7944 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:49:57.591654    7944 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:49:57.594374    7944 out.go:252]   - Generating certificates and keys ...
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:49:57.595930    7944 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:49:57.595930    7944 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:49:57.595930    7944 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:49:57.598936    7944 out.go:252]   - Booting up control plane ...
	I1217 00:49:57.598936    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:49:57.599930    7944 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001130665s
	I1217 00:49:57.599930    7944 kubeadm.go:319] 
	I1217 00:49:57.599930    7944 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 00:49:57.599930    7944 kubeadm.go:319] 	- The kubelet is not running
	I1217 00:49:57.600944    7944 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 00:49:57.600944    7944 kubeadm.go:319] 
	I1217 00:49:57.601093    7944 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 00:49:57.601093    7944 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 00:49:57.601093    7944 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 00:49:57.601093    7944 kubeadm.go:319] 
	W1217 00:49:57.601093    7944 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001130665s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
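(Note: the failure point in this attempt is the wait-control-plane phase: after writing the static Pod manifests and starting the kubelet, kubeadm polls http://127.0.0.1:10248/healthz for up to 4m0s and gives up with "context deadline exceeded" because the kubelet never reported healthy. The preflight warnings hint at why: the WSL2 node runs cgroups v1, which kubelet v1.35 rejects unless the FailCgroupV1 configuration option is explicitly set to false. A simplified Go sketch of that wait loop — not kubeadm's actual code; the poll interval is an illustrative assumption:

    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
        defer cancel()
        client := &http.Client{Timeout: 5 * time.Second}
        for {
            resp, err := client.Get("http://127.0.0.1:10248/healthz")
            if err == nil {
                ok := resp.StatusCode == http.StatusOK
                resp.Body.Close()
                if ok {
                    fmt.Println("kubelet is healthy")
                    return
                }
            }
            select {
            case <-ctx.Done():
                fmt.Println("kubelet is not healthy after 4m0s") // matches the log above
                return
            case <-time.After(2 * time.Second):
            }
        }
    }
)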
	
	I1217 00:49:57.606482    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 00:49:58.061133    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:49:58.080059    7944 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:49:58.085171    7944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:49:58.098234    7944 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:49:58.098234    7944 kubeadm.go:158] found existing configuration files:
	
	I1217 00:49:58.102655    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:49:58.116544    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:49:58.121754    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:49:58.141782    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:49:58.155836    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:49:58.159790    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:49:58.177864    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:49:58.192169    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:49:58.196436    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:49:58.213653    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:49:58.227417    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:49:58.231893    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:49:58.251588    7944 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:49:58.366677    7944 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 00:49:58.451159    7944 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:49:58.548545    7944 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:53:59.244804    7944 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 00:53:59.244874    7944 kubeadm.go:319] 
	I1217 00:53:59.245013    7944 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 00:53:59.252131    7944 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:53:59.252131    7944 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:53:59.252131    7944 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:53:59.252131    7944 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 00:53:59.253316    7944 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 00:53:59.253422    7944 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_INET: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 00:53:59.255258    7944 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 00:53:59.255381    7944 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 00:53:59.255513    7944 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 00:53:59.255633    7944 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 00:53:59.255694    7944 kubeadm.go:319] OS: Linux
	I1217 00:53:59.255790    7944 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:53:59.255877    7944 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:53:59.255998    7944 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:53:59.256094    7944 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:53:59.256215    7944 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:53:59.256364    7944 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:53:59.256426    7944 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:53:59.256548    7944 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:53:59.256670    7944 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:53:59.256888    7944 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:53:59.257050    7944 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:53:59.257070    7944 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:53:59.257070    7944 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:53:59.272325    7944 out.go:252]   - Generating certificates and keys ...
	I1217 00:53:59.272325    7944 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:53:59.273020    7944 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:53:59.273020    7944 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:53:59.273020    7944 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:53:59.273353    7944 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:53:59.273480    7944 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:53:59.273606    7944 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:53:59.273733    7944 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:53:59.273865    7944 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:53:59.274056    7944 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:53:59.274056    7944 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:53:59.274182    7944 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:53:59.274309    7944 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:53:59.274434    7944 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:53:59.274560    7944 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:53:59.274685    7944 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:53:59.274812    7944 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:53:59.274938    7944 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:53:59.275063    7944 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:53:59.277866    7944 out.go:252]   - Booting up control plane ...
	I1217 00:53:59.277866    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:53:59.278506    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:53:59.278506    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:53:59.278506    7944 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:53:59.279865    7944 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:53:59.280054    7944 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:53:59.280189    7944 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000873338s
	I1217 00:53:59.280189    7944 kubeadm.go:319] 
	I1217 00:53:59.280189    7944 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 00:53:59.280189    7944 kubeadm.go:319] 	- The kubelet is not running
	I1217 00:53:59.280189    7944 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 00:53:59.280189    7944 kubeadm.go:319] 
	I1217 00:53:59.280189    7944 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 00:53:59.280712    7944 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 00:53:59.280785    7944 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 00:53:59.280785    7944 kubeadm.go:319] 
	I1217 00:53:59.280785    7944 kubeadm.go:403] duration metric: took 12m7.3287248s to StartCluster
	I1217 00:53:59.280785    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:59.285017    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:59.529112    7944 cri.go:89] found id: ""
	I1217 00:53:59.529112    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.529112    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:59.529112    7944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:53:59.533754    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:59.574863    7944 cri.go:89] found id: ""
	I1217 00:53:59.574863    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.574863    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:53:59.574863    7944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:53:59.579181    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:59.620688    7944 cri.go:89] found id: ""
	I1217 00:53:59.620688    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.620688    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:53:59.620688    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:59.627987    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:59.676059    7944 cri.go:89] found id: ""
	I1217 00:53:59.676114    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.676114    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:59.676114    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:59.680719    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:59.723707    7944 cri.go:89] found id: ""
	I1217 00:53:59.723707    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.723707    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:59.723707    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:59.729555    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:59.774476    7944 cri.go:89] found id: ""
	I1217 00:53:59.774476    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.774560    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:59.774560    7944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:59.780477    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:59.820909    7944 cri.go:89] found id: ""
	I1217 00:53:59.820909    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.820909    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:59.820909    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:59.820909    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:59.893583    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:59.893583    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:59.926154    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:59.926154    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:00.179462    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:00.169127   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.170223   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.171927   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.173016   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.174482   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:00.169127   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.170223   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.171927   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.173016   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.174482   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:00.179462    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:54:00.179462    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:54:00.221875    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:54:00.221875    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 00:54:00.281055    7944 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000873338s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 00:54:00.281122    7944 out.go:285] * 
	W1217 00:54:00.281210    7944 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000873338s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 00:54:00.281448    7944 out.go:285] * 
	W1217 00:54:00.283315    7944 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:54:00.296133    7944 out.go:203] 
	W1217 00:54:00.298699    7944 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000873338s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 00:54:00.299289    7944 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 00:54:00.299350    7944 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 00:54:00.301526    7944 out.go:203] 
	
	
	==> Docker <==
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799347277Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799352978Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799377780Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799412283Z" level=info msg="Initializing buildkit"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.911073637Z" level=info msg="Completed buildkit initialization"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918044834Z" level=info msg="Daemon has completed initialization"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918252552Z" level=info msg="API listen on [::]:2376"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918284354Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 00:41:48 functional-409700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918293455Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 00:41:48 functional-409700 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:41:48 functional-409700 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 17 00:41:48 functional-409700 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 17 00:41:49 functional-409700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Loaded network plugin cni"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 00:41:49 functional-409700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:20.606787   44285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:20.607875   44285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:20.608886   44285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:20.610113   44285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:20.611276   44285 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001333] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001212] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001083] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000810] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000879] FS:  0000000000000000 GS:  0000000000000000
	[Dec17 00:41] CPU: 8 PID: 65919 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000795] RIP: 0033:0x7fc513f26b20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7fc513f26af6.
	[  +0.000661] RSP: 002b:00007ffce9a430e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000957] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000792] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000787] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001172] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001280] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001257] FS:  0000000000000000 GS:  0000000000000000
	[  +0.952455] CPU: 6 PID: 66046 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000828] RIP: 0033:0x7f7de767eb20
	[  +0.000402] Code: Unable to access opcode bytes at RIP 0x7f7de767eaf6.
	[  +0.000691] RSP: 002b:00007ffdccfc39b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000866] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000810] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001071] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001218] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001105] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001100] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 00:56:20 up  1:15,  0 user,  load average: 0.66, 0.43, 0.46
	Linux functional-409700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:56:17 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:56:18 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 505.
	Dec 17 00:56:18 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:56:18 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:56:18 functional-409700 kubelet[44125]: E1217 00:56:18.447501   44125 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:56:18 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:56:18 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:56:19 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 506.
	Dec 17 00:56:19 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:56:19 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:56:19 functional-409700 kubelet[44153]: E1217 00:56:19.206701   44153 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:56:19 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:56:19 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:56:19 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 507.
	Dec 17 00:56:19 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:56:19 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:56:19 functional-409700 kubelet[44180]: E1217 00:56:19.937902   44180 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:56:19 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:56:19 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:56:20 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 508.
	Dec 17 00:56:20 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:56:20 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:56:20 functional-409700 kubelet[44295]: E1217 00:56:20.685910   44295 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:56:20 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:56:20 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
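The dump above pins down the failure: kubelet v1.35.0-beta.0 exits on startup because the node is still on cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), systemd restarts it in a loop (restart counter 505-508), the apiserver never binds 8441, and every later kubectl call is refused. Since this is a WSL2 kernel (5.15.153.1-microsoft-standard-WSL2), a minimal triage sketch follows; the .wslconfig mechanism is the standard WSL2 switch to cgroup v2, not something taken from this log:

	# inside the node (e.g. via minikube ssh): cgroup2fs = v2, tmpfs = v1
	stat -fc %T /sys/fs/cgroup/

	# on the Windows host: boot the WSL2 kernel with cgroup v2 only, then
	# restart WSL so minikube's docker driver inherits it
	#   %UserProfile%\.wslconfig:
	#     [wsl2]
	#     kernelCommandLine = cgroup_no_v1=all
	wsl --shutdown

Alternatively, the kubeadm warning in the same dump spells out the stopgap: keep cgroup v1 by setting the kubelet configuration option 'FailCgroupV1' to 'false' and explicitly skipping the SystemVerification preflight check.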
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700: exit status 2 (572.0311ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-409700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/StatusCmd (5.29s)
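The pair of probes shows the split state directly: status --format evaluates a Go template against minikube's status struct, so {{.Host}} reports Running (the container is up) while {{.APIServer}} reports Stopped, and the non-zero exit code encodes the degraded state, which is why the harness treats exit status 2 as possibly ok. To read every component in one call instead of templating field by field (a sketch; --output json is a standard minikube flag, and the output line is illustrative):

	out/minikube-windows-amd64.exe status -p functional-409700 --output json
	# {"Name":"functional-409700","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured"}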

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (124.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-409700 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1636: (dbg) Non-zero exit: kubectl --context functional-409700 create deployment hello-node-connect --image kicbase/echo-server: exit status 1 (92.0586ms)

** stderr ** 
	error: failed to create deployment: Post "https://127.0.0.1:56622/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": EOF

** /stderr **
functional_test.go:1638: failed to create hello-node deployment with this command "kubectl --context functional-409700 create deployment hello-node-connect --image kicbase/echo-server": exit status 1.
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-409700 describe po hello-node-connect
functional_test.go:1612: (dbg) Non-zero exit: kubectl --context functional-409700 describe po hello-node-connect: exit status 1 (50.3492009s)

** stderr ** 
	E1217 00:55:34.943307    1152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:55:45.040534    1152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:55:55.077867    1152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:56:05.121662    1152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:56:15.160698    1152 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:1614: "kubectl --context functional-409700 describe po hello-node-connect" failed: exit status 1
functional_test.go:1616: hello-node pod describe:
functional_test.go:1618: (dbg) Run:  kubectl --context functional-409700 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-409700 logs -l app=hello-node-connect: exit status 1 (40.302667s)

** stderr ** 
	E1217 00:56:25.302584   13688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:56:35.387811   13688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:56:45.427195   13688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:56:55.468534   13688 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

** /stderr **
functional_test.go:1620: "kubectl --context functional-409700 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-409700 describe svc hello-node-connect
functional_test.go:1624: (dbg) Non-zero exit: kubectl --context functional-409700 describe svc hello-node-connect: exit status 1 (29.3374078s)

** stderr ** 
	E1217 00:57:05.609592    7744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:15.693472    7744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"

** /stderr **
functional_test.go:1626: "kubectl --context functional-409700 describe svc hello-node-connect" failed: exit status 1
functional_test.go:1628: hello-node svc describe:
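All three post-mortem kubectl calls fail the same way as the deployment create: EOF against https://127.0.0.1:56622, the host port Docker publishes for this profile's apiserver port 8441 (see the docker inspect dump below). Getting EOF rather than "connection refused" is what docker-proxy typically produces when it accepts the host-side connection and then closes it because nothing is listening inside the container, consistent with the dead apiserver seen earlier. Two quick checks, assuming the docker CLI on the host:

	docker port functional-409700 8441/tcp             # -> 127.0.0.1:56622
	kubectl --context functional-409700 cluster-info   # keeps failing until the apiserver is back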
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-409700
helpers_test.go:244: (dbg) docker inspect functional-409700:

-- stdout --
	[
	    {
	        "Id": "ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de",
	        "Created": "2025-12-17T00:24:05.223199249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:24:05.522288836Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hosts",
	        "LogPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de-json.log",
	        "Name": "/functional-409700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-409700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-409700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-409700",
	                "Source": "/var/lib/docker/volumes/functional-409700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-409700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-409700",
	                "name.minikube.sigs.k8s.io": "functional-409700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e875b43ca920e8e90c82b8f1c4d2b0999a57d980ebe17c6406f45a4ccb58168",
	            "SandboxKey": "/var/run/docker/netns/6e875b43ca92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56623"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56619"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56620"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56621"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56622"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-409700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ee1b2722ed4e503e063723d4c0c00abc99d4e57387b6e181156511528a5a0896",
	                    "EndpointID": "42fbe7a4b084643a92cc2b6c93734665bcde06afb5eef9fe47b1c8f2757b2d71",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-409700",
	                        "ee5097ea8c4b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
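
The inspect JSON above is exactly what minikube's port lookups read: each published container port under NetworkSettings.Ports maps to a 127.0.0.1 host port (22/tcp -> 56623 here), and later log lines query it with a Go template. A minimal Go sketch of the same decode, assuming a local docker CLI and the container name from the output above (illustrative, not minikube's own code):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // Only the fields we need from `docker container inspect` output.
    type inspect struct {
    	NetworkSettings struct {
    		Ports map[string][]struct {
    			HostIp   string
    			HostPort string
    		}
    	}
    }

    func main() {
    	out, err := exec.Command("docker", "container", "inspect", "functional-409700").Output()
    	if err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	var containers []inspect // inspect always returns a JSON array
    	if err := json.Unmarshal(out, &containers); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	for _, b := range containers[0].NetworkSettings.Ports["22/tcp"] {
    		fmt.Printf("ssh is published on %s:%s\n", b.HostIp, b.HostPort) // e.g. 127.0.0.1:56623
    	}
    }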
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700: exit status 2 (576.6701ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
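
The `--format={{.Host}}` argument is a Go text/template evaluated against a status structure, which is how the command can print "Running" for the host while still exiting 2 (some other component is down). A minimal sketch with an illustrative struct, not minikube's actual type:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Hypothetical status shape; minikube's real struct has more fields.
    type status struct {
    	Host, Kubelet, APIServer string
    }

    func main() {
    	// The --format flag value is parsed as a template like this.
    	tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
    	// Prints "Running" even though kubelet/apiserver are stopped.
    	tmpl.Execute(os.Stdout, status{Host: "Running", Kubelet: "Stopped", APIServer: "Stopped"})
    }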
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 logs -n 25: (1.3531425s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh        │ functional-409700 ssh -n functional-409700 sudo cat /tmp/does/not/exist/cp-test.txt                                                                       │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ addons     │ functional-409700 addons list                                                                                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ addons     │ functional-409700 addons list -o json                                                                                                                     │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /etc/ssl/certs/4168.pem                                                                                                    │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /usr/share/ca-certificates/4168.pem                                                                                        │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                  │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /etc/ssl/certs/41682.pem                                                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /usr/share/ca-certificates/41682.pem                                                                                       │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                  │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ docker-env │ functional-409700 docker-env                                                                                                                              │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /etc/test/nested/copy/4168/hosts                                                                                           │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo systemctl is-active crio                                                                                                       │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │                     │
	│ license    │                                                                                                                                                           │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image      │ functional-409700 image load --daemon kicbase/echo-server:functional-409700 --alsologtostderr                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image      │ functional-409700 image ls                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image      │ functional-409700 image load --daemon kicbase/echo-server:functional-409700 --alsologtostderr                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image      │ functional-409700 image ls                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image      │ functional-409700 image load --daemon kicbase/echo-server:functional-409700 --alsologtostderr                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image      │ functional-409700 image ls                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image      │ functional-409700 image save kicbase/echo-server:functional-409700 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image      │ functional-409700 image rm kicbase/echo-server:functional-409700 --alsologtostderr                                                                        │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image      │ functional-409700 image ls                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image      │ functional-409700 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image      │ functional-409700 image ls                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image      │ functional-409700 image save --daemon kicbase/echo-server:functional-409700 --alsologtostderr                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	└────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:41:42
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
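
That header documents the glog/klog prefix layout used by every line below it. A small, self-contained sketch of splitting one such line into its fields, using an illustrative regexp (not taken from minikube):

    package main

    import (
    	"fmt"
    	"regexp"
    )

    // Matches "[IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg".
    var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([^:]+):(\d+)\] (.*)$`)

    func main() {
    	m := klogLine.FindStringSubmatch("I1217 00:41:42.742737    7944 out.go:360] Setting OutFile to fd 1692 ...")
    	if m != nil {
    		fmt.Printf("severity=%s date=%s time=%s tid=%s file=%s line=%s msg=%q\n",
    			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
    	}
    }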
	I1217 00:41:42.742737    7944 out.go:360] Setting OutFile to fd 1692 ...
	I1217 00:41:42.785452    7944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:41:42.785452    7944 out.go:374] Setting ErrFile to fd 2032...
	I1217 00:41:42.785452    7944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:41:42.823093    7944 out.go:368] Setting JSON to false
	I1217 00:41:42.826928    7944 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3691,"bootTime":1765928411,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:41:42.827062    7944 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:41:42.832423    7944 out.go:179] * [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:41:42.834008    7944 notify.go:221] Checking for updates...
	I1217 00:41:42.836028    7944 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:41:42.837747    7944 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:41:42.839400    7944 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:41:42.841743    7944 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:41:42.843853    7944 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:41:42.846824    7944 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:41:42.847138    7944 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:41:43.032802    7944 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:41:43.036200    7944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:41:43.287623    7944 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-17 00:41:43.26443223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:41:43.290624    7944 out.go:179] * Using the docker driver based on existing profile
	I1217 00:41:43.295624    7944 start.go:309] selected driver: docker
	I1217 00:41:43.295624    7944 start.go:927] validating driver "docker" against &{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:41:43.295624    7944 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:41:43.302622    7944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:41:43.528811    7944 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-17 00:41:43.511883839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:41:43.567003    7944 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:41:43.567003    7944 cni.go:84] Creating CNI manager for ""
	I1217 00:41:43.567003    7944 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:41:43.567003    7944 start.go:353] cluster config:
	{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:41:43.571110    7944 out.go:179] * Starting "functional-409700" primary control-plane node in "functional-409700" cluster
	I1217 00:41:43.575004    7944 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 00:41:43.577924    7944 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:41:43.581930    7944 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:41:43.581930    7944 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:41:43.581930    7944 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 00:41:43.581930    7944 cache.go:65] Caching tarball of preloaded images
	I1217 00:41:43.582517    7944 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 00:41:43.582517    7944 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 00:41:43.582517    7944 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\config.json ...
	I1217 00:41:43.660928    7944 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:41:43.660928    7944 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:41:43.660928    7944 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:41:43.660928    7944 start.go:360] acquireMachinesLock for functional-409700: {Name:mk3729943c20c012b6c7db136193ce43a4a81cc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:41:43.660928    7944 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-409700"
	I1217 00:41:43.660928    7944 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:41:43.660928    7944 fix.go:54] fixHost starting: 
	I1217 00:41:43.667914    7944 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:41:43.723914    7944 fix.go:112] recreateIfNeeded on functional-409700: state=Running err=<nil>
	W1217 00:41:43.723914    7944 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:41:43.726919    7944 out.go:252] * Updating the running docker "functional-409700" container ...
	I1217 00:41:43.726919    7944 machine.go:94] provisionDockerMachine start ...
	I1217 00:41:43.731914    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:43.796916    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:43.796916    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:43.796916    7944 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:41:43.969131    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:41:43.969131    7944 ubuntu.go:182] provisioning hostname "functional-409700"
	I1217 00:41:43.975058    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.033428    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:44.033980    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:44.033980    7944 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-409700 && echo "functional-409700" | sudo tee /etc/hostname
	I1217 00:41:44.218389    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:41:44.221624    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.281826    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:44.282333    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:44.282333    7944 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-409700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-409700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-409700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:41:44.449024    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
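
The heredoc above keeps /etc/hosts consistent with the hostname just set: if the name is already present it does nothing, otherwise it rewrites an existing 127.0.1.1 line or appends one. The same guarantee sketched in Go for illustration (not minikube's code):

    package main

    import (
    	"fmt"
    	"os"
    	"regexp"
    )

    // ensureHostsEntry mirrors the shell logic: no-op if the name is present,
    // rewrite an existing 127.0.1.1 line, else append a new mapping.
    func ensureHostsEntry(path, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	if regexp.MustCompile(`(?m)\s` + regexp.QuoteMeta(name) + `$`).Match(data) {
    		return nil // hostname already mapped
    	}
    	line127 := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
    	if line127.Match(data) {
    		data = line127.ReplaceAll(data, []byte("127.0.1.1 "+name))
    	} else {
    		data = append(data, []byte("127.0.1.1 "+name+"\n")...)
    	}
    	return os.WriteFile(path, data, 0o644)
    }

    func main() {
    	if err := ensureHostsEntry("/etc/hosts", "functional-409700"); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }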
	I1217 00:41:44.449024    7944 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 00:41:44.449024    7944 ubuntu.go:190] setting up certificates
	I1217 00:41:44.449024    7944 provision.go:84] configureAuth start
	I1217 00:41:44.452071    7944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:41:44.516121    7944 provision.go:143] copyHostCerts
	I1217 00:41:44.516430    7944 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 00:41:44.516430    7944 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 00:41:44.516430    7944 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 00:41:44.517399    7944 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 00:41:44.517399    7944 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 00:41:44.517399    7944 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 00:41:44.518364    7944 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 00:41:44.518364    7944 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 00:41:44.518364    7944 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 00:41:44.519103    7944 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-409700 san=[127.0.0.1 192.168.49.2 functional-409700 localhost minikube]
	I1217 00:41:44.613354    7944 provision.go:177] copyRemoteCerts
	I1217 00:41:44.617354    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:41:44.620354    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.676405    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:44.805633    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:41:44.840310    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:41:44.872497    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:41:44.899304    7944 provision.go:87] duration metric: took 450.2424ms to configureAuth
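
configureAuth regenerates a server certificate whose SANs are the list logged at provision.go:117 (127.0.0.1, 192.168.49.2, functional-409700, localhost, minikube). A hedged sketch of issuing such a certificate with Go's crypto/x509; minikube signs with its CA rather than self-signing as done here, and the expiry below only mirrors the CertExpiration value (26280h0m0s) from the config dump:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.functional-409700"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs from the provision.go:117 log line.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
    		DNSNames:    []string{"functional-409700", "localhost", "minikube"},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }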
	I1217 00:41:44.899304    7944 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:41:44.899304    7944 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:41:44.902693    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.962192    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:44.962661    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:44.962688    7944 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 00:41:45.129265    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 00:41:45.129265    7944 ubuntu.go:71] root file system type: overlay
	I1217 00:41:45.129265    7944 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 00:41:45.133980    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.191141    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:45.191583    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:45.191676    7944 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 00:41:45.381081    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 00:41:45.384910    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.439634    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:45.439634    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:45.439634    7944 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 00:41:45.639837    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:41:45.639837    7944 machine.go:97] duration metric: took 1.9128981s to provisionDockerMachine
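
The unit file written during the step above relies on a standard systemd pattern: an empty ExecStart= clears the command inherited from the base dockerd unit so the following ExecStart= fully replaces it, and the diff-or-replace one-liner only moves the file into place and restarts docker when the rendered unit actually changed. A minimal sketch of writing such an override by hand (the drop-in path and dockerd flags here are illustrative only):

    package main

    import (
    	"fmt"
    	"os"
    )

    func main() {
    	// The first, empty ExecStart= resets the inherited command; without it
    	// systemd would see two ExecStart= settings and refuse to start the
    	// service, exactly as the comments in the generated unit describe.
    	override := `[Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
    `
    	if err := os.WriteFile("/etc/systemd/system/docker.service.d/override.conf", []byte(override), 0o644); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	// Then apply with `systemctl daemon-reload && systemctl restart docker`,
    	// as the log's diff-or-replace one-liner does.
    }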
	I1217 00:41:45.639837    7944 start.go:293] postStartSetup for "functional-409700" (driver="docker")
	I1217 00:41:45.639837    7944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:41:45.643968    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:41:45.647579    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.702256    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:45.830302    7944 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:41:45.840912    7944 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:41:45.840912    7944 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:41:45.840912    7944 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 00:41:45.840912    7944 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 00:41:45.841469    7944 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 00:41:45.842433    7944 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts -> hosts in /etc/test/nested/copy/4168
	I1217 00:41:45.846605    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4168
	I1217 00:41:45.861850    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 00:41:45.894051    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts --> /etc/test/nested/copy/4168/hosts (40 bytes)
	I1217 00:41:45.924540    7944 start.go:296] duration metric: took 284.7004ms for postStartSetup
	I1217 00:41:45.929030    7944 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:41:45.931390    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.988238    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:46.118181    7944 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:41:46.128256    7944 fix.go:56] duration metric: took 2.4673029s for fixHost
	I1217 00:41:46.128336    7944 start.go:83] releasing machines lock for "functional-409700", held for 2.4673029s
	I1217 00:41:46.132380    7944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:41:46.192243    7944 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 00:41:46.196238    7944 ssh_runner.go:195] Run: cat /version.json
	I1217 00:41:46.196238    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:46.199443    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:46.250894    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:46.252723    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:46.374927    7944 ssh_runner.go:195] Run: systemctl --version
	W1217 00:41:46.375040    7944 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 00:41:46.393243    7944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:41:46.405015    7944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:41:46.411122    7944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:41:46.427748    7944 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:41:46.427748    7944 start.go:496] detecting cgroup driver to use...
	I1217 00:41:46.427748    7944 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:41:46.428359    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:41:46.459279    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:41:46.481169    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:41:46.495981    7944 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:41:46.501301    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 00:41:46.522269    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:41:46.543007    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:41:46.564748    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W1217 00:41:46.571173    7944 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 00:41:46.571173    7944 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 00:41:46.587140    7944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:41:46.608125    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:41:46.628561    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:41:46.651071    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 00:41:46.670567    7944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:41:46.691876    7944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:41:46.708884    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:46.907593    7944 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 00:41:47.157536    7944 start.go:496] detecting cgroup driver to use...
	I1217 00:41:47.157588    7944 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:41:47.161701    7944 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 00:41:47.187508    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:41:47.211591    7944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:41:47.291331    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:41:47.315837    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:41:47.336371    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:41:47.365154    7944 ssh_runner.go:195] Run: which cri-dockerd
	I1217 00:41:47.376814    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 00:41:47.391947    7944 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 00:41:47.416863    7944 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 00:41:47.573803    7944 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 00:41:47.742508    7944 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 00:41:47.742508    7944 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 00:41:47.769569    7944 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 00:41:47.792419    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:47.926195    7944 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 00:41:48.924753    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:41:48.948387    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 00:41:48.972423    7944 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 00:41:49.001034    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:41:49.024808    7944 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 00:41:49.170637    7944 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 00:41:49.341524    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:49.489502    7944 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 00:41:49.515161    7944 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 00:41:49.538565    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:49.678445    7944 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 00:41:49.792662    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:41:49.810919    7944 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 00:41:49.817201    7944 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 00:41:49.824745    7944 start.go:564] Will wait 60s for crictl version
	I1217 00:41:49.829680    7944 ssh_runner.go:195] Run: which crictl
	I1217 00:41:49.841215    7944 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:41:49.886490    7944 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 00:41:49.890545    7944 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:41:49.932656    7944 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:41:49.973421    7944 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 00:41:49.976704    7944 cli_runner.go:164] Run: docker exec -t functional-409700 dig +short host.docker.internal
	I1217 00:41:50.163467    7944 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 00:41:50.168979    7944 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 00:41:50.182632    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:50.243980    7944 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 00:41:50.246233    7944 kubeadm.go:884] updating cluster {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:41:50.246321    7944 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:41:50.249328    7944 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:41:50.284688    7944 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-409700
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1217 00:41:50.284688    7944 docker.go:621] Images already preloaded, skipping extraction
	I1217 00:41:50.288341    7944 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:41:50.318208    7944 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-409700
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1217 00:41:50.318208    7944 cache_images.go:86] Images are preloaded, skipping loading
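The preload check above is driven entirely by docker images inside the node. A quick way to confirm the same image set from the host (a sketch, using the profile name from this log) is:

	# Lists the images present in the profile's container runtime
	minikube -p functional-409700 image ls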
	I1217 00:41:50.318208    7944 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1217 00:41:50.318208    7944 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-409700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
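The unit fragment above is written out as a kubelet drop-in; the bare ExecStart= line is standard systemd practice, since an empty assignment clears the base unit's command list before the replacement is declared. One way to inspect the merged result on the node (illustrative, not taken from this run):

	# Prints the base kubelet unit followed by every drop-in, in the order systemd applies them
	systemctl cat kubelet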
	I1217 00:41:50.322786    7944 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 00:41:50.580992    7944 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 00:41:50.580992    7944 cni.go:84] Creating CNI manager for ""
	I1217 00:41:50.580992    7944 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:41:50.580992    7944 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:41:50.580992    7944 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-409700 NodeName:functional-409700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:41:50.581552    7944 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-409700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
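The generated kubeadm config above spans four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). Assuming a kubeadm release that ships the validate subcommand (it does not appear in this log, so treat it as an assumption), the file could be sanity-checked on the node before use:

	# Assumption: "kubeadm config validate" is available in this kubeadm release
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml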
	
	I1217 00:41:50.586113    7944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:41:50.602747    7944 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:41:50.606600    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:41:50.618442    7944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 00:41:50.639202    7944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:41:50.660303    7944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I1217 00:41:50.686181    7944 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:41:50.699393    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:50.841016    7944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:41:50.909095    7944 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700 for IP: 192.168.49.2
	I1217 00:41:50.909095    7944 certs.go:195] generating shared ca certs ...
	I1217 00:41:50.909181    7944 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:41:50.909751    7944 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 00:41:50.909751    7944 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 00:41:50.909751    7944 certs.go:257] generating profile certs ...
	I1217 00:41:50.911054    7944 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\client.key
	I1217 00:41:50.911486    7944 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key.dc66fb1b
	I1217 00:41:50.911858    7944 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key
	I1217 00:41:50.913273    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 00:41:50.913634    7944 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 00:41:50.913687    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 00:41:50.913976    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 00:41:50.914271    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 00:41:50.914593    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 00:41:50.915068    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 00:41:50.916395    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:41:50.945779    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:41:50.974173    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:41:51.006494    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:41:51.039634    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:41:51.069500    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:41:51.095965    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:41:51.124108    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:41:51.153111    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 00:41:51.181612    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:41:51.209244    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 00:41:51.236994    7944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:41:51.261730    7944 ssh_runner.go:195] Run: openssl version
	I1217 00:41:51.280852    7944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.301978    7944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 00:41:51.322912    7944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.331873    7944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.336845    7944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.388885    7944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
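The ln / test -L pairs in this block follow OpenSSL's hashed-directory convention: each CA is linked into /etc/ssl/certs as <subject-hash>.0, where the hash is exactly what the openssl x509 -hash call above prints. An illustrative check (the 3ec20f2e value is inferred from the symlink tested here):

	# Prints the subject hash used to name the trust-store symlink
	openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem   # expected: 3ec20f2e
	ls -l /etc/ssl/certs/3ec20f2e.0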
	I1217 00:41:51.407531    7944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.426119    7944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:41:51.446689    7944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.455113    7944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.459541    7944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.507465    7944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:41:51.525452    7944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.543170    7944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 00:41:51.560439    7944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.566853    7944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.571342    7944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.621647    7944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:41:51.639899    7944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:41:51.651440    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:41:51.702199    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:41:51.752106    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:41:51.800819    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:41:51.851441    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:41:51.900439    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
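Each of the six probes above relies on openssl's -checkend flag: exit status 0 means the certificate will still be valid 86400 seconds (24 h) from now, non-zero means it will have expired within that window. A minimal sketch of the same check:

	# Exit 0: valid for at least another 24h; exit 1: expires within the window
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400 \
	  && echo "still valid" || echo "expiring soon"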
	I1217 00:41:51.944312    7944 kubeadm.go:401] StartCluster: {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:41:51.948688    7944 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 00:41:51.985002    7944 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:41:51.998839    7944 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:41:51.998925    7944 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:41:52.003287    7944 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:41:52.016206    7944 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.019955    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:52.077101    7944 kubeconfig.go:125] found "functional-409700" server: "https://127.0.0.1:56622"
	I1217 00:41:52.084213    7944 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:41:52.100216    7944 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 00:24:17.645837868 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 00:41:50.679316242 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
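Drift detection here comes down to diff -u's exit status: 0 when the deployed kubeadm.yaml matches the freshly rendered one, 1 when they differ, which is what sends minikube down the reconfiguration path. Equivalently (paths taken from this log):

	# A non-zero exit from diff is what minikube treats as "config drift"
	sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
	  && echo "in sync" || echo "drift detected, will reconfigure"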
	I1217 00:41:52.100258    7944 kubeadm.go:1161] stopping kube-system containers ...
	I1217 00:41:52.104145    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 00:41:52.137767    7944 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 00:41:52.163943    7944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:41:52.178186    7944 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 17 00:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 17 00:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 17 00:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 17 00:28 /etc/kubernetes/scheduler.conf
	
	I1217 00:41:52.182824    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:41:52.204493    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:41:52.219638    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.223951    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:41:52.243159    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:41:52.260005    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.264353    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:41:52.281662    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:41:52.297828    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.301928    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:41:52.320845    7944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:41:52.344713    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:52.568408    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:53.273580    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:53.519011    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:53.597190    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
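For a restart, minikube re-runs individual kubeadm phases instead of a full kubeadm init. Collected from the five commands above, the sequence is:

	# Restart path: certs -> kubeconfigs -> kubelet -> static pods -> etcd, all from one config
	kubeadm init phase certs all           --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubeconfig all      --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase kubelet-start       --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase control-plane all   --config /var/tmp/minikube/kubeadm.yaml
	kubeadm init phase etcd local          --config /var/tmp/minikube/kubeadm.yaml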
	I1217 00:41:53.657031    7944 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:41:53.662643    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:54.162433    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:54.661965    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... identical "sudo pgrep -xnf kube-apiserver.*minikube.*" polls repeated at ~500 ms intervals from I1217 00:41:55.162165 through I1217 00:42:53.163097; roughly 60 s elapsed with no apiserver process found ...]
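The poll command itself is worth unpacking: -f matches the pattern against the full command line rather than just the process name, -x requires the whole line to match the pattern, and -n returns only the newest match, so the probe only fires once a kube-apiserver process with minikube in its arguments exists:

	# -f: match full command line; -x: whole line must match; -n: newest match only
	sudo pgrep -xnf 'kube-apiserver.*minikube.*'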
	I1217 00:42:53.661774    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:42:53.693561    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.693561    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:42:53.697663    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:42:53.729976    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.729976    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:42:53.733954    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:42:53.762808    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.762808    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:42:53.767775    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:42:53.797017    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.797017    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:42:53.800693    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:42:53.829028    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.829028    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:42:53.832681    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:42:53.860730    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.860730    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:42:53.864375    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:42:53.893858    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.893858    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:42:53.893858    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:42:53.893858    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:42:53.958662    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:42:53.958662    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:42:53.990110    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:42:53.990110    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:42:54.075886    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:42:54.062994   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.064181   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.068054   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.070063   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.071483   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:42:54.062994   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.064181   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.068054   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.070063   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.071483   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:42:54.075886    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:42:54.075886    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:42:54.124100    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:42:54.124100    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
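The container-status gather above uses a small shell fallback: command substitution picks crictl when it is on PATH, and the trailing || drops to plain docker ps -a if crictl is missing or errors out. The same pattern in $() form (a sketch):

	# Prefer crictl if installed; otherwise, or on failure, fall back to docker
	sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a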
	[... two further probe-and-gather cycles at I1217 00:42:56.693664 and I1217 00:42:59.736560, identical to the cycle above: empty container lists for every kube-system component, the same kubelet/dmesg/Docker/container-status gathering, and "kubectl describe nodes" (PIDs 23971 and 24124) again refused with "dial tcp [::1]:8441: connect: connection refused" ...]
	I1217 00:43:02.788235    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:02.812066    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:02.844035    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.844035    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:02.847391    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:02.879346    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.879346    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:02.883507    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:02.911508    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.911573    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:02.915132    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:02.944186    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.944186    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:02.948177    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:02.977489    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.977489    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:02.980961    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:03.009657    7944 logs.go:282] 0 containers: []
	W1217 00:43:03.009657    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:03.013587    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:03.042816    7944 logs.go:282] 0 containers: []
	W1217 00:43:03.042816    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:03.042816    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:03.042816    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:03.126456    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:03.115768   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.116665   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.118976   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.119737   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.121834   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:03.126456    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:03.126456    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:03.167566    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:03.167566    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:03.219094    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:03.219094    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:03.285299    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:03.285299    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
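	The cycle above is minikube's retry loop: poll for a kube-apiserver process, check each control-plane container by name, then re-gather kubelet, dmesg, Docker, and container-status logs. A minimal sketch for replaying the same checks by hand inside the node — every command is taken from the log itself; only the for-loop is a restructuring:

	    # Is a kube-apiserver process running for this cluster?
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	    # Per-component container check (the log runs one docker ps per component).
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet; do
	      docker ps -a --filter=name=k8s_${name} --format='{{.ID}}'
	    done

	    # Log gathering, matching the "Gathering logs for ..." steps.
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u docker -u cri-docker -n 400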
	I1217 00:43:05.820619    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:05.845854    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:05.875867    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.875867    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:05.879229    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:05.909558    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.909558    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:05.912556    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:05.942200    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.942273    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:05.945627    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:05.975289    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.975289    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:05.979052    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:06.009570    7944 logs.go:282] 0 containers: []
	W1217 00:43:06.009570    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:06.013210    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:06.042977    7944 logs.go:282] 0 containers: []
	W1217 00:43:06.042977    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:06.046640    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:06.075849    7944 logs.go:282] 0 containers: []
	W1217 00:43:06.075849    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:06.075849    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:06.075849    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:06.120266    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:06.120266    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:06.168821    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:06.168821    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:06.230879    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:06.230879    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:06.260885    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:06.260885    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:06.340031    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:06.330529   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.331395   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.334293   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.335557   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.336695   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:08.845285    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:08.868682    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:08.897291    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.897291    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:08.900871    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:08.928001    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.928001    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:08.931488    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:08.961792    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.961792    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:08.965426    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:08.994180    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.994253    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:08.997983    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:09.026539    7944 logs.go:282] 0 containers: []
	W1217 00:43:09.026539    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:09.030228    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:09.061065    7944 logs.go:282] 0 containers: []
	W1217 00:43:09.061094    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:09.064483    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:09.093815    7944 logs.go:282] 0 containers: []
	W1217 00:43:09.093815    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:09.093815    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:09.093815    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:09.173989    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:09.162229   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.164006   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.164905   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.168015   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.169720   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:09.174037    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:09.174037    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:09.214846    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:09.214846    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:09.269685    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:09.269685    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:09.331802    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:09.331802    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
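	Every "describe nodes" attempt in this loop fails identically: kubectl cannot reach the apiserver at localhost:8441 (connection refused), consistent with the empty container listings. A quick way to confirm nothing is listening on that port, sketched under the assumption of shell access to the node; <profile> is a hypothetical placeholder, since the log does not show the profile name at this point:

	    # Hypothetical check: is anything listening on the apiserver port?
	    minikube ssh -p <profile> -- "sudo ss -tlnp | grep 8441 || echo 'nothing listening on 8441'"

	    # The same request kubectl makes, expected to fail with connection refused:
	    minikube ssh -p <profile> -- "curl -ksS https://localhost:8441/api?timeout=32s"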
	I1217 00:43:11.869149    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:11.892656    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:11.921635    7944 logs.go:282] 0 containers: []
	W1217 00:43:11.921635    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:11.926449    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:11.957938    7944 logs.go:282] 0 containers: []
	W1217 00:43:11.957938    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:11.961505    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:11.991894    7944 logs.go:282] 0 containers: []
	W1217 00:43:11.991894    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:11.995992    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:12.025039    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.025039    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:12.029016    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:12.060459    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.060459    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:12.064652    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:12.096164    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.096164    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:12.100038    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:12.129762    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.129824    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:12.129824    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:12.129824    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:12.194950    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:12.194950    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:12.227435    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:12.227435    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:12.311750    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:12.301902   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.303071   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.304222   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.305986   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.307529   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:12.311750    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:12.311750    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:12.352387    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:12.352387    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:14.907650    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:14.933011    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:14.961340    7944 logs.go:282] 0 containers: []
	W1217 00:43:14.961340    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:14.964869    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:14.991179    7944 logs.go:282] 0 containers: []
	W1217 00:43:14.991179    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:14.996502    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:15.025325    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.025325    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:15.031024    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:15.058452    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.058452    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:15.062691    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:15.091232    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.091232    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:15.096528    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:15.127551    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.127551    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:15.131605    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:15.161113    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.161113    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:15.161113    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:15.161113    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:15.189644    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:15.189644    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:15.270306    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:15.259821   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.260629   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.263303   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.264244   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.266788   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:15.270306    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:15.270306    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:15.311714    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:15.311714    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:15.371391    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:15.371391    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:17.939209    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:17.962095    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:17.990273    7944 logs.go:282] 0 containers: []
	W1217 00:43:17.990273    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:17.993918    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:18.025229    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.025229    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:18.029538    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:18.060092    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.060092    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:18.064444    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:18.095199    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.095230    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:18.098808    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:18.129658    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.129658    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:18.133236    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:18.163628    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.163628    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:18.167493    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:18.199253    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.199253    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:18.199253    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:18.199253    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:18.252203    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:18.252203    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:18.316097    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:18.316097    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:18.347393    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:18.347393    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:18.426495    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:18.416595   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.417796   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.419140   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.420105   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.421235   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:18.426495    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:18.426495    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:20.972950    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:20.998624    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:21.025837    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.025837    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:21.029315    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:21.061085    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.061085    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:21.065387    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:21.092871    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.092871    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:21.096706    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:21.126179    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.126179    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:21.129834    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:21.159720    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.159720    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:21.163263    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:21.193011    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.193011    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:21.196667    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:21.229222    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.229222    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:21.229222    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:21.229222    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:21.279391    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:21.279391    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:21.341649    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:21.341649    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:21.372055    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:21.372055    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:21.451011    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:21.440556   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.441861   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.442811   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.446984   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.448016   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:21.451011    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:21.451011    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:24.011538    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:24.037171    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:24.067520    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.067544    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:24.070755    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:24.101421    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.101454    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:24.104927    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:24.133336    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.133336    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:24.137178    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:24.164662    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.164662    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:24.168324    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:24.200218    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.200218    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:24.203764    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:24.234603    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.234603    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:24.238011    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:24.267400    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.267400    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:24.267400    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:24.267400    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:24.348263    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:24.338918   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.339739   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.341999   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.343378   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.344717   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:24.348263    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:24.348263    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:24.393298    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:24.393298    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:24.446709    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:24.446709    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:24.518891    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:24.518891    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
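	The "container status" step above relies on a shell fallback chain: the command substitution `which crictl || echo crictl` resolves to the crictl binary path when it is installed, and otherwise to the bare word crictl, whose failure to execute then triggers the plain docker fallback:

	    # Fallback idiom from the log's container-status step:
	    # crictl if available, otherwise fall back to docker ps -a.
	    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a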
	I1217 00:43:27.054877    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:27.078747    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:27.111142    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.111142    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:27.114844    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:27.143801    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.143801    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:27.147663    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:27.176215    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.176215    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:27.179758    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:27.208587    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.208587    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:27.211873    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:27.241061    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.241061    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:27.244905    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:27.276011    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.276065    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:27.279281    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:27.309068    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.309068    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:27.309068    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:27.309068    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:27.372079    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:27.372079    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:27.403215    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:27.403215    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:27.502209    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:27.492924   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.494023   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.494999   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.496603   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.497726   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:27.502209    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:27.502209    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:27.543251    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:27.543251    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:30.103213    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:30.126929    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:30.158148    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.158148    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:30.162286    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:30.191927    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.191927    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:30.195748    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:30.225040    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.225040    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:30.229444    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:30.260498    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.260498    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:30.264750    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:30.293312    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.293312    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:30.296869    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:30.325167    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.325167    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:30.328938    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:30.363267    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.363267    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:30.363267    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:30.363267    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:30.393795    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:30.393795    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:30.487446    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:30.464124   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.465346   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.468428   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.469684   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.481402   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:30.487446    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:30.487446    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:30.530226    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:30.530226    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:30.585635    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:30.585635    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:33.151438    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:33.175766    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:33.207203    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.207203    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:33.210965    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:33.237795    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.237795    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:33.242087    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:33.273041    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.273041    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:33.277103    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:33.305283    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.305283    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:33.309730    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:33.337737    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.337737    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:33.341408    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:33.370694    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.370694    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:33.374111    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:33.407836    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.407836    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:33.407836    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:33.407836    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:33.434955    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:33.434955    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:33.529365    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:33.517320   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.518450   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.519517   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.520800   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.522107   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:33.529365    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:33.529365    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:33.572145    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:33.572145    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:33.624502    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:33.624502    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
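Every kubectl call in these rounds fails with connection refused on localhost:8441, i.e. nothing is listening on the apiserver port at all. A hedged quick check from inside the node, using standard tools plus the pgrep probe the log itself runs (the port number is taken from the errors above):

    # is anything listening on the apiserver port, and is the process alive?
    sudo ss -ltnp | grep 8441 || echo "nothing listening on 8441"
    curl -sk --max-time 2 https://localhost:8441/healthz; echo
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"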
	I1217 00:43:36.189426    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:36.213378    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:36.243407    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.243407    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:36.246746    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:36.274995    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.274995    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:36.278271    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:36.305533    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.305533    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:36.309459    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:36.338892    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.338892    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:36.342669    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:36.373516    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.373516    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:36.377003    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:36.404831    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.404831    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:36.408515    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:36.437790    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.437790    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:36.437790    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:36.437790    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:36.540076    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:36.526050   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.528341   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.531176   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.532283   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.533415   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:36.540076    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:36.540076    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:36.580664    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:36.580664    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:36.635234    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:36.635234    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:36.695702    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:36.695702    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:39.230926    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:39.255012    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:39.288661    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.288661    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:39.293143    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:39.320903    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.320967    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:39.324725    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:39.350161    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.350161    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:39.353696    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:39.380073    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.380073    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:39.383515    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:39.411510    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.411510    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:39.415491    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:39.449683    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.449683    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:39.453620    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:39.487800    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.487800    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:39.487800    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:39.487800    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:39.552943    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:39.552943    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:39.582035    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:39.583033    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:39.660499    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:39.647312   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.648102   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.652665   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.654408   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.654966   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:39.660499    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:39.660499    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:39.705645    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:39.705645    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:42.267731    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:42.297885    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:42.329299    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.329326    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:42.332959    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:42.361173    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.361173    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:42.365107    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:42.393236    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.393236    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:42.397363    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:42.430949    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.430949    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:42.435377    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:42.465696    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.465696    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:42.468849    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:42.512182    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.512182    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:42.515699    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:42.545680    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.545680    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:42.545680    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:42.545680    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:42.607372    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:42.607372    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:42.637761    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:42.637761    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:42.720140    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:42.709136   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.709905   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.711877   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.712984   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.713829   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:42.720140    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:42.720140    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:42.760712    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:42.760712    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
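The timestamps show these diagnostic rounds recurring on a roughly three-second cadence while minikube waits for the apiserver to appear. A minimal sketch of an equivalent wait loop (illustrative only, with a hypothetical five-minute deadline; this is not minikube's actual retry code):

    # poll the apiserver port until it answers or the deadline passes
    deadline=$((SECONDS + 300))
    until curl -sk --max-time 2 https://localhost:8441/healthz >/dev/null 2>&1; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo "apiserver never came up on 8441" >&2
        exit 1
      fi
      sleep 3
    done
    echo "apiserver is responding"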
	I1217 00:43:45.318861    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:45.345331    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:45.376136    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.376136    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:45.379539    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:45.408720    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.408720    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:45.412623    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:45.444664    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.444664    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:45.448226    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:45.484195    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.484195    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:45.488022    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:45.515242    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.515242    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:45.519184    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:45.551260    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.551260    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:45.554894    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:45.581795    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.581795    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:45.581795    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:45.581795    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:45.625880    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:45.625880    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:45.678280    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:45.678280    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:45.738938    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:45.738938    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:45.770054    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:45.770054    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:45.854057    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:45.839960   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.842045   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.843544   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.846571   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.847420   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:48.359806    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:48.384092    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:48.415158    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.415192    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:48.418996    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:48.446149    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.446149    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:48.449676    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:48.487416    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.487416    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:48.491652    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:48.520073    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.520073    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:48.524101    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:48.550421    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.550421    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:48.554497    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:48.583643    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.583666    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:48.587154    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:48.616812    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.616812    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:48.616812    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:48.616812    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:48.681323    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:48.681323    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:48.712866    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:48.712866    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:48.798447    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:48.788338   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.789333   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.790575   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.791655   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.792589   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:48.798447    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:48.798447    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:48.839546    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:48.839546    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
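Since every container probe comes back empty, the useful evidence is in the kubelet and Docker journals that minikube gathers but this report does not inline. A sketch for pulling the likely-relevant lines yourself, built on the same journalctl invocations shown above (the grep filter is an assumption about what matters, not part of minikube's tooling):

    # surface recent errors from the units minikube is already collecting
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40
    sudo journalctl -u docker -u cri-docker -n 400 --no-pager | grep -iE 'error|fail' | tail -n 40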
	I1217 00:43:51.393802    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:51.419527    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:51.453783    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.453783    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:51.457619    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:51.496053    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.496053    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:51.499949    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:51.528492    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.528492    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:51.531946    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:51.560363    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.560363    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:51.563875    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:51.597143    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.597143    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:51.600764    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:51.630459    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.630459    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:51.634473    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:51.667072    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.667072    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:51.667072    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:51.667072    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:51.719154    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:51.719154    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:51.779761    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:51.779761    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:51.810036    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:51.810036    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:51.887952    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:51.877388   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.878091   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.881129   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.882321   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.883227   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:51.887952    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:51.887952    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:54.434243    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:54.457541    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:54.486698    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.486698    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:54.491137    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:54.520500    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.520500    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:54.524176    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:54.552487    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.552487    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:54.556310    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:54.585424    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.585424    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:54.588683    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:54.619901    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.619970    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:54.623608    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:54.655623    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.655706    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:54.658833    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:54.690413    7944 logs.go:282] 0 containers: []
	W1217 00:43:54.690413    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:54.690413    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:54.690492    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:54.771466    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:54.760114   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.761075   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.762159   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.763541   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:54.764770   26838 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:54.771466    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:54.771466    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:54.813307    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:54.813307    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:54.874633    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:54.875154    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:54.937630    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:54.937630    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:57.472782    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:57.497186    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:57.526677    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.526745    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:57.530218    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:57.557916    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.557948    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:57.562041    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:57.590924    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.590924    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:57.594569    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:57.621738    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.621738    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:57.627319    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:57.656111    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.656111    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:57.659689    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:57.690217    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.690217    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:57.693915    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:57.723629    7944 logs.go:282] 0 containers: []
	W1217 00:43:57.723629    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:57.723629    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:57.723688    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:57.788129    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:57.788129    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:57.818809    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:57.818809    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:57.903055    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:57.891485   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.892810   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.893729   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.896044   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:57.896988   27000 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:57.903055    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:57.903055    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:57.944153    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:57.944153    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:00.501950    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:00.530348    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:00.561749    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.562270    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:00.566179    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:00.596812    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.596812    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:00.600551    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:00.628898    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.628898    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:00.632187    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:00.661210    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.661255    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:00.664477    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:00.692625    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.692625    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:00.696565    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:00.727420    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.727420    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:00.731176    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:00.761041    7944 logs.go:282] 0 containers: []
	W1217 00:44:00.761041    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:00.761041    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:00.761041    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:00.813195    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:00.813286    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:00.875819    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:00.875819    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:00.906004    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:00.906004    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:00.995354    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:00.985498   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.986676   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.987771   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.989033   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:00.990260   27163 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:44:00.995354    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:00.995354    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
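The same bundle can be collected from the host without ssh'ing into the node. A sketch assuming a bash-like shell on the Windows host and a placeholder profile name (substitute whatever profile the failing test created); the binary path matches the one used elsewhere in this report:

    # placeholder profile; the real name comes from the failing test
    PROFILE=functional-000000
    out/minikube-windows-amd64.exe status -p "$PROFILE"
    out/minikube-windows-amd64.exe logs -p "$PROFILE" --file=minikube-logs.txt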
	I1217 00:44:03.542659    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:03.566401    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:03.597875    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.597875    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:03.602087    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:03.631114    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.631114    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:03.635275    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:03.664437    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.665863    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:03.669211    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:03.697100    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.697100    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:03.701535    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:03.731200    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.731200    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:03.735391    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:03.764893    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.764893    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:03.768303    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:03.799245    7944 logs.go:282] 0 containers: []
	W1217 00:44:03.799245    7944 logs.go:284] No container was found matching "kindnet"
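The probe sequence above is minikube's control-plane inventory: first a pgrep for a running kube-apiserver process, then one docker ps -a per component, filtered on the k8s_ name prefix that cri-dockerd gives Kubernetes-managed containers. Zero matches for every component, including exited ones since -a is used, means kubelet never created any control-plane containers at all. The same inventory as a standalone loop, for rerunning on the node by hand:

    # sketch: re-run minikube's per-component container probe in one loop
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet; do
      ids=$(docker ps -a --filter=name=k8s_${c} --format '{{.ID}}')
      echo "${c}: ${ids:-none}"
    done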
	I1217 00:44:03.799245    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:03.799245    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
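Since no control-plane containers exist, the kubelet journal collected here is the most likely place the root cause surfaces. When inspecting the node manually, narrowing the same 400 lines to errors is usually enough; the grep pattern below is illustrative only and is not part of the report's own collection.

    # sketch: check the kubelet unit state and skim its recent errors
    sudo systemctl status kubelet --no-pager
    sudo journalctl -u kubelet -n 400 --no-pager | grep -iE 'fail|error' | tail -n 20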
	I1217 00:44:03.863068    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:03.863068    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:03.892825    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:03.892825    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:03.975253    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:03.964400   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.965730   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.967384   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.969805   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.970929   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:03.964400   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.965730   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.967384   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.969805   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:03.970929   27299 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
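The "describe nodes" gather step shells into the node and runs the version-pinned kubectl binary against the node-local kubeconfig, so it fails with the same connection-refused errors as the host-side kubectl and the report falls back to the remaining log sources. The manual equivalent, with the profile name as the only assumption:

    # sketch: run the same in-node describe step by hand (profile name assumed)
    minikube ssh -p <profile> -- \
      "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
         --kubeconfig=/var/lib/minikube/kubeconfig"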
	I1217 00:44:03.975253    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:03.975253    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:04.016164    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:04.016164    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
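The container-status source uses a fallback chain: the backticked `which crictl || echo crictl` substitutes the crictl path when installed (or the bare name, which then fails to run), and the trailing || sudo docker ps -a catches either failure. Written out as an explicit conditional, the intent is roughly the following; note the original one-liner additionally falls back to docker if crictl exists but errors out.

    # sketch: the container-status fallback written out long-hand
    if command -v crictl >/dev/null 2>&1; then
      sudo crictl ps -a    # preferred: CRI view of all containers
    else
      sudo docker ps -a    # fallback when crictl is absent
    fi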
	I1217 00:44:06.571695    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:06.597029    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:06.627889    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.627889    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:06.631611    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:06.661118    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.661118    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:06.664736    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:06.694336    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.694336    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:06.698523    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:06.728693    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.728693    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:06.732767    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:06.762060    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.762130    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:06.765313    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:06.795222    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.795222    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:06.799233    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:06.829491    7944 logs.go:282] 0 containers: []
	W1217 00:44:06.829525    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:06.829525    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:06.829558    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:06.858476    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:06.858476    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:06.938014    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:06.927171   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.928103   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.929321   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.932292   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.933974   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:06.927171   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.928103   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.929321   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.932292   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:06.933974   27442 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:06.938014    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:06.938014    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:06.978960    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:06.978960    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:07.027942    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:07.027942    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:09.595591    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:09.619202    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:09.648727    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.648727    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:09.653265    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:09.684682    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.684682    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:09.688140    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:09.715249    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.715249    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:09.718566    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:09.749969    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.749969    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:09.753003    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:09.779832    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.779832    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:09.783608    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:09.812286    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.812326    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:09.816849    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:09.845801    7944 logs.go:282] 0 containers: []
	W1217 00:44:09.845801    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:09.845801    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:09.845801    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:09.890276    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:09.891278    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:09.945030    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:09.945030    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:10.007215    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:10.007215    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:10.037318    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:10.037318    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:10.122162    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:10.111724   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.112922   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.114124   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.115187   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.116442   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:10.111724   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.112922   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.114124   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.115187   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:10.116442   27617 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
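The timestamps show the whole probe-and-gather pass repeating on a roughly three-second cadence (00:44:00, :03, :06, :10, ...), which is minikube waiting for the apiserver process to appear before giving up. A stripped-down wait loop with the same shape, purely illustrative:

    # sketch: wait for an apiserver process the way the probe does, every ~3s
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null 2>&1; do
      echo "$(date -Is) kube-apiserver not running yet"
      sleep 3
    done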
	I1217 00:44:12.627660    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:12.651516    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:12.684952    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.684952    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:12.688749    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:12.717327    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.717327    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:12.721146    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:12.749548    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.749548    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:12.752616    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:12.784015    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.784015    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:12.787596    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:12.817388    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.817388    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:12.821554    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:12.849737    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.849737    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:12.853589    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:12.882735    7944 logs.go:282] 0 containers: []
	W1217 00:44:12.882735    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:12.882735    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:12.882735    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:12.966389    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:12.956160   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.957149   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.957910   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.960356   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.961793   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:12.956160   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.957149   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.957910   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.960356   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:12.961793   27744 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:12.966389    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:12.966389    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:13.009759    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:13.009759    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:13.057767    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:13.057767    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:13.121685    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:13.121685    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:15.659014    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:15.683463    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:15.714834    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.714857    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:15.718351    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:15.749782    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.749812    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:15.753368    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:15.782321    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.782321    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:15.785961    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:15.816416    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.816416    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:15.822152    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:15.848733    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.848791    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:15.852246    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:15.881272    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.881310    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:15.886378    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:15.917818    7944 logs.go:282] 0 containers: []
	W1217 00:44:15.917818    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:15.917892    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:15.917892    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:15.983033    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:15.983033    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:16.015133    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:16.015133    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:16.105395    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:16.093215   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.094155   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.098670   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.100261   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.100776   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:16.093215   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.094155   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.098670   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.100261   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:16.100776   27899 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:16.105395    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:16.105438    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:16.146209    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:16.146209    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:18.701433    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:18.725475    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:18.759149    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.759149    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:18.762892    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:18.795437    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.795437    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:18.799127    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:18.835050    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.835580    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:18.839967    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:18.867222    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.867222    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:18.870583    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:18.899263    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.899263    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:18.902802    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:18.934115    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.934115    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:18.937420    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:18.969205    7944 logs.go:282] 0 containers: []
	W1217 00:44:18.969205    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:18.969205    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:18.969205    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:19.030841    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:19.030841    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:19.061419    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:19.061938    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:19.143852    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:19.132860   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.133712   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.136777   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.137881   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.138767   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:19.132860   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.133712   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.136777   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.137881   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:19.138767   28052 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:19.143852    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:19.143852    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:19.187635    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:19.187709    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:21.747174    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:21.771176    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:21.800995    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.800995    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:21.804142    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:21.836064    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.836131    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:21.839865    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:21.868223    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.868292    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:21.871954    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:21.900714    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.900714    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:21.904281    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:21.931611    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.931611    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:21.935666    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:21.963188    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.963188    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:21.967538    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:21.994527    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.994527    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:21.994527    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:21.994527    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:22.061635    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:22.061635    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:22.093213    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:22.093213    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:22.179644    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:22.168849   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.170300   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.172127   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.174562   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.176641   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:22.168849   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.170300   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.172127   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.174562   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.176641   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:22.179644    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:22.179644    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:22.223092    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:22.223092    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:24.783065    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:24.806396    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:24.838512    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.838512    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:24.842023    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:24.871052    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.871052    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:24.874639    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:24.903466    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.903466    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:24.906973    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:24.938000    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.938000    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:24.942149    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:24.970337    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.970371    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:24.973308    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:25.003460    7944 logs.go:282] 0 containers: []
	W1217 00:44:25.003460    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:25.007008    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:25.035638    7944 logs.go:282] 0 containers: []
	W1217 00:44:25.035638    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:25.035638    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:25.035638    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:25.097833    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:25.097833    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:25.128758    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:25.128758    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:25.209843    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:25.201498   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.202808   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.204759   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.205808   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.207251   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:25.201498   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.202808   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.204759   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.205808   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.207251   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:25.209843    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:25.209843    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:25.250600    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:25.250600    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:27.806610    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:27.831257    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:27.864142    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.864142    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:27.867995    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:27.897561    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.897561    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:27.900925    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:27.931079    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.931079    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:27.934151    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:27.964321    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.964321    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:27.969534    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:27.999709    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.999709    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:28.002966    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:28.034961    7944 logs.go:282] 0 containers: []
	W1217 00:44:28.035008    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:28.038649    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:28.067733    7944 logs.go:282] 0 containers: []
	W1217 00:44:28.067733    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:28.067733    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:28.067733    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:28.150573    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:28.140463   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.141608   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.143366   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.146165   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.147662   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:28.140463   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.141608   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.143366   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.146165   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.147662   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:28.150573    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:28.150573    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:28.192203    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:28.192203    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:28.248534    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:28.248624    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:28.306585    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:28.306585    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:30.842138    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:30.867340    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:30.899142    7944 logs.go:282] 0 containers: []
	W1217 00:44:30.899142    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:30.903037    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:30.932057    7944 logs.go:282] 0 containers: []
	W1217 00:44:30.932057    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:30.938184    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:30.965554    7944 logs.go:282] 0 containers: []
	W1217 00:44:30.965554    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:30.969154    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:30.997999    7944 logs.go:282] 0 containers: []
	W1217 00:44:30.997999    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:31.001861    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:31.031079    7944 logs.go:282] 0 containers: []
	W1217 00:44:31.031142    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:31.034735    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:31.063582    7944 logs.go:282] 0 containers: []
	W1217 00:44:31.063582    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:31.069235    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:31.098869    7944 logs.go:282] 0 containers: []
	W1217 00:44:31.098948    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:31.098948    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:31.098948    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:31.127253    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:31.127253    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:31.211541    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:31.202334   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.203549   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.205527   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.206517   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.207872   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:31.202334   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.203549   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.205527   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.206517   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.207872   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:31.211541    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:31.211541    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:31.258478    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:31.258478    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:31.308932    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:31.308932    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:33.876600    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:33.899781    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:33.930969    7944 logs.go:282] 0 containers: []
	W1217 00:44:33.930969    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:33.934621    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:33.964938    7944 logs.go:282] 0 containers: []
	W1217 00:44:33.964938    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:33.968775    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:33.998741    7944 logs.go:282] 0 containers: []
	W1217 00:44:33.998793    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:34.002265    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:34.030279    7944 logs.go:282] 0 containers: []
	W1217 00:44:34.030279    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:34.034177    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:34.063244    7944 logs.go:282] 0 containers: []
	W1217 00:44:34.063244    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:34.066512    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:34.095842    7944 logs.go:282] 0 containers: []
	W1217 00:44:34.095842    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:34.099843    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:34.133173    7944 logs.go:282] 0 containers: []
	W1217 00:44:34.133173    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:34.133173    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:34.133173    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:34.198297    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:34.198297    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:34.229134    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:34.229134    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:34.305327    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:34.295599   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.296405   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.298959   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.301044   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.302073   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:34.295599   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.296405   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.298959   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.301044   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.302073   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:34.305327    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:34.305327    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:34.346912    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:34.346912    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
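	Each ~3 s iteration of the wait loop runs the same probe sequence: pgrep for a kube-apiserver process, then one docker ps filter per control-plane component. The sequence condenses to roughly the following (an illustrative reconstruction of the commands logged above, not minikube's actual source):
		sudo pgrep -xnf 'kube-apiserver.*minikube.*'
		for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
		  docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'   # empty output => component not running
		done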
	I1217 00:44:36.903423    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:36.929005    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:36.959255    7944 logs.go:282] 0 containers: []
	W1217 00:44:36.959255    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:36.962841    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:36.991016    7944 logs.go:282] 0 containers: []
	W1217 00:44:36.991016    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:36.995294    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:37.027615    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.027615    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:37.031225    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:37.063793    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.063793    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:37.067539    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:37.098257    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.098257    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:37.104945    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:37.135094    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.135094    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:37.139494    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:37.170825    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.170825    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:37.170825    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:37.170825    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:37.236025    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:37.236025    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:37.266143    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:37.266143    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:37.356401    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:37.344016   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.345140   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.346045   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.350812   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.351984   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:37.344016   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.345140   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.346045   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.350812   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.351984   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:37.356401    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:37.356401    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:37.397010    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:37.397010    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:39.951831    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:39.975669    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:40.007629    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.007629    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:40.011435    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:40.041534    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.041534    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:40.045543    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:40.072927    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.072927    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:40.076835    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:40.104604    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.104604    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:40.108678    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:40.136644    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.136644    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:40.140732    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:40.172579    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.172579    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:40.176191    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:40.207078    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.207078    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:40.207078    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:40.207171    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:40.271921    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:40.271921    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:40.302650    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:40.302650    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:40.384552    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:40.373909   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.375248   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.376424   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.377960   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.378727   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:40.373909   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.375248   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.376424   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.377960   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.378727   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:40.384552    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:40.384552    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:40.425377    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:40.425377    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:42.980281    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:43.003860    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:43.036168    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.036168    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:43.040136    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:43.068891    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.068891    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:43.072976    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:43.103823    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.103823    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:43.107774    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:43.134339    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.134339    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:43.137929    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:43.168166    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.168166    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:43.172279    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:43.200333    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.200333    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:43.204183    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:43.236225    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.236225    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:43.236225    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:43.236225    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:43.280577    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:43.280577    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:43.331604    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:43.331604    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:43.392357    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:43.392357    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:43.423125    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:43.423125    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:43.508115    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:43.496794   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.498087   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.499982   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.501972   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.502846   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:43.496794   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.498087   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.499982   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.501972   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.502846   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:46.013886    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:46.042290    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:46.074707    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.074707    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:46.078216    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:46.109309    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.109309    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:46.112661    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:46.141002    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.141002    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:46.144585    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:46.172550    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.172550    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:46.178681    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:46.209054    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.209054    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:46.212761    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:46.242212    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.242212    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:46.245894    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:46.273677    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.273677    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:46.273719    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:46.273719    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:46.339840    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:46.339840    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:46.373287    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:46.373287    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:46.452686    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:46.442520   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.443589   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.446075   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.448524   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.449556   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:46.442520   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.443589   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.446075   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.448524   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.449556   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:46.452686    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:46.452686    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:46.498608    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:46.498608    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:49.050761    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:49.075428    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:49.105673    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.105673    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:49.109924    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:49.140245    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.140245    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:49.143980    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:49.175115    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.175115    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:49.181267    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:49.213667    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.213667    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:49.217486    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:49.249277    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.249277    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:49.252880    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:49.279244    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.279287    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:49.282893    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:49.313826    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.313826    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:49.313826    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:49.313826    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:49.395270    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:49.385168   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.385960   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.388757   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.390178   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.391697   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:49.385168   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.385960   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.388757   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.390178   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.391697   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:49.395270    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:49.395270    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:49.439990    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:49.439990    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:49.493048    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:49.493048    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:49.555675    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:49.555675    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:52.091191    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:52.121154    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:52.152807    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.152807    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:52.157047    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:52.185793    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.185793    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:52.188792    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:52.217804    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.218793    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:52.221792    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:52.253749    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.253749    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:52.257528    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:52.286783    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.286783    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:52.290341    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:52.319799    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.319799    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:52.323376    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:52.351656    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.351656    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:52.351656    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:52.351656    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:52.395381    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:52.395381    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:52.449049    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:52.449049    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:52.511942    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:52.511942    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:52.541707    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:52.541707    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:52.622537    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:52.614766   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.615704   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.616948   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.617983   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.618983   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:52.614766   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.615704   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.616948   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.617983   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.618983   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:55.130052    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:55.154497    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:55.185053    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.185086    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:55.188968    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:55.215935    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.215935    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:55.220385    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:55.249124    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.249159    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:55.253058    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:55.282148    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.282230    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:55.285701    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:55.315081    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.315081    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:55.320240    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:55.350419    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.350449    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:55.353993    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:55.386346    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.386346    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:55.386346    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:55.386346    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:55.463518    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:55.456649   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.457723   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.458695   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.460286   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.461389   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:55.456649   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.457723   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.458695   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.460286   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.461389   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:55.463518    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:55.463518    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:55.502884    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:55.502884    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:55.567300    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:55.567300    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:55.630547    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:55.630547    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:58.165717    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:58.189522    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:58.223415    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.223415    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:58.227138    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:58.256133    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.256133    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:58.259919    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:58.289751    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.289751    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:58.293341    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:58.323835    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.323835    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:58.327981    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:58.358897    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.358897    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:58.362525    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:58.393696    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.393696    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:58.397786    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:58.426810    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.426810    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:58.426810    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:58.426810    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:58.492668    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:58.492668    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:58.523854    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:58.523854    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:58.609164    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:58.598901   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.599812   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.602076   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.604272   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.606217   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:58.598901   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.599812   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.602076   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.604272   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.606217   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:58.609164    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:58.609164    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:58.654356    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:58.654356    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:01.211859    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:01.236949    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:01.268645    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.268645    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:01.273856    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:01.305336    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.305336    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:01.309133    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:01.339056    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.339056    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:01.343432    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:01.373802    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.373802    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:01.378587    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:01.408624    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.408624    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:01.414210    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:01.446499    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.446499    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:01.450189    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:01.479782    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.479782    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:01.479782    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:01.479829    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:01.526819    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:01.526819    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:01.591797    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:01.591797    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:01.624206    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:01.624206    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:01.713187    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:01.701188   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.703402   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.704627   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.705600   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.706926   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:01.701188   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.703402   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.704627   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.705600   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.706926   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:01.713187    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:01.713187    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
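
Annotation: the repeated "connection refused" on localhost:8441 above means nothing is listening on the apiserver port inside the node. A minimal sketch for probing this by hand, assuming the cluster from this run is still up; <profile> is a placeholder for the minikube profile name, and the kubectl binary path and kubeconfig path are copied verbatim from the commands in the log:

	minikube ssh -p <profile> -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz

A healthy apiserver answers "ok"; the same refusal seen above would confirm the control plane never came up.
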
	I1217 00:45:04.261443    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:04.286201    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:04.315610    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.315610    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:04.319607    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:04.348007    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.348007    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:04.351825    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:04.378854    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.378854    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:04.382430    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:04.414385    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.414385    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:04.419751    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:04.447734    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.447734    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:04.452650    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:04.483414    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.483414    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:04.488519    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:04.520173    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.520173    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:04.520173    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:04.520173    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:04.583573    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:04.583573    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:04.615102    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:04.615102    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:04.703186    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:04.693374   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.694566   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.695324   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.698221   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.699360   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:04.693374   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.694566   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.695324   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.698221   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.699360   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:04.703186    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:04.703186    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:04.745696    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:04.745696    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
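
Annotation: each `docker ps -a --filter=name=k8s_<component>` probe above returning "0 containers" means the kubelet never created the corresponding control-plane container. The per-component probes can be collapsed into one pass with the same documented docker flags; a sketch, meant to be run inside the node (for example via `minikube ssh`):

	docker ps -a --filter name=k8s_ --format 'table {{.ID}}\t{{.Names}}\t{{.Status}}'

An empty table here matches the "No container was found matching ..." warnings in the cycle above.
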
	I1217 00:45:07.302305    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:07.327138    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:07.357072    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.357072    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:07.361245    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:07.393135    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.393135    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:07.397020    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:07.426598    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.426623    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:07.430259    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:07.459216    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.459216    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:07.463233    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:07.491206    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.491206    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:07.496432    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:07.527082    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.527082    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:07.530080    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:07.563609    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.563609    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:07.563609    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:07.563609    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:07.624175    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:07.624175    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:07.654046    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:07.655373    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:07.733760    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:07.724686   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.725828   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.726798   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.727878   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.729852   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:07.724686   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.725828   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.726798   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.727878   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.729852   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:07.733760    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:07.733760    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:07.775826    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:07.775826    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:10.333009    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:10.359433    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:10.394281    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.394281    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:10.399772    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:10.431921    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.431921    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:10.435941    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:10.466929    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.466929    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:10.469952    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:10.500979    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.500979    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:10.504132    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:10.532972    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.532972    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:10.536526    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:10.565609    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.565609    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:10.569307    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:10.597263    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.597263    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:10.597263    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:10.597263    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:10.625496    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:10.625496    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:10.716452    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:10.706137   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.707571   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.709046   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.710674   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.711932   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:10.706137   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.707571   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.709046   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.710674   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.711932   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:10.716452    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:10.716535    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:10.757898    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:10.757898    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:10.807685    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:10.807685    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:13.376757    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:13.401022    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:13.433179    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.433179    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:13.438943    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:13.466315    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.466315    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:13.469406    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:13.498170    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.498170    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:13.503463    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:13.531045    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.531045    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:13.534623    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:13.563549    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.563572    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:13.567173    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:13.595412    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.595412    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:13.599138    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:13.627347    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.627347    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:13.627347    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:13.627347    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:13.687440    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:13.688440    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:13.718641    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:13.718785    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:13.801949    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:13.792952   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.794106   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.795272   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.796913   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.798020   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:13.792952   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.794106   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.795272   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.796913   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.798020   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:13.801949    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:13.801949    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:13.846773    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:13.847288    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:16.401019    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:16.426837    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:16.461985    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.461985    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:16.465693    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:16.494330    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.494354    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:16.497490    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:16.527742    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.527742    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:16.531287    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:16.561095    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.561095    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:16.564902    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:16.594173    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.594173    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:16.597642    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:16.627598    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.627598    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:16.630884    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:16.659950    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.660031    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:16.660031    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:16.660031    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:16.740660    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:16.730888   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.732344   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.734426   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.736250   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.737220   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:16.730888   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.732344   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.734426   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.736250   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.737220   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:16.740692    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:16.740692    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:16.782319    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:16.782319    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:16.835245    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:16.835245    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:16.900147    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:16.900147    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:19.437638    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:19.462468    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:19.493244    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.493244    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:19.497367    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:19.526430    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.526430    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:19.530589    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:19.559166    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.559222    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:19.562429    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:19.594311    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.594311    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:19.597936    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:19.627339    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.627339    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:19.632033    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:19.659648    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.659648    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:19.663351    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:19.696628    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.696628    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:19.696628    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:19.696628    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:19.749701    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:19.749701    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:19.809018    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:19.809018    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:19.838771    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:19.838771    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:19.921290    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:19.910944   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.912216   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.913176   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.916258   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.918467   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:19.910944   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.912216   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.913176   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.916258   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.918467   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:19.921290    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:19.921290    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:22.468833    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:22.494625    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:22.526034    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.526034    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:22.529623    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:22.565289    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.565289    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:22.569286    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:22.597280    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.597280    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:22.601010    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:22.630330    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.630330    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:22.634511    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:22.663939    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.663939    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:22.667575    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:22.696762    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.696792    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:22.700137    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:22.732285    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.732285    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:22.732285    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:22.732285    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:22.814702    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:22.805990   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.808311   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.809673   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.810947   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.811986   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:22.805990   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.808311   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.809673   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.810947   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.811986   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:22.814702    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:22.814702    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:22.864515    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:22.864515    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:22.917896    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:22.917896    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:22.984213    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:22.984213    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:25.517090    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:25.542531    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:25.575294    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.575294    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:25.579526    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:25.610041    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.610041    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:25.614160    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:25.643682    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.643712    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:25.647264    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:25.679557    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.679557    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:25.685184    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:25.712791    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.712791    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:25.716775    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:25.747803    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.747803    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:25.751621    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:25.782130    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.782130    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:25.782130    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:25.782130    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:25.833735    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:25.833735    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:25.894476    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:25.894476    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:25.925218    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:25.925218    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:26.009195    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:26.000055   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.001227   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.002238   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.003136   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.005907   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:26.000055   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.001227   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.002238   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.003136   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.005907   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:26.009195    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:26.009195    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:28.558504    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:28.581900    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:28.615041    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.615041    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:28.619020    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:28.647386    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.647386    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:28.651512    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:28.679029    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.679029    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:28.682977    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:28.714035    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.714035    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:28.717407    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:28.746896    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.746920    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:28.749895    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:28.782541    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.782574    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:28.786249    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:28.813250    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.813250    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:28.813250    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:28.813250    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:28.891492    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:28.880764   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.881769   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.882976   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.883809   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.886227   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:28.880764   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.881769   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.882976   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.883809   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.886227   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:28.891492    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:28.891492    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:28.934039    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:28.934039    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:28.986066    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:28.986066    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:29.044402    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:29.045400    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:31.579014    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:31.605723    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:31.639437    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.639437    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:31.643001    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:31.672858    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.672858    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:31.676418    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:31.706815    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.706815    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:31.711450    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:31.739165    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.739165    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:31.742794    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:31.774213    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.774213    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:31.778092    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:31.808021    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.808021    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:31.811911    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:31.841111    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.841174    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:31.841207    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:31.841207    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:31.903600    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:31.903600    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:31.934979    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:31.934979    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:32.016581    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:32.006571   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.007538   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.008919   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.010207   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.011489   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:32.006571   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.007538   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.008919   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.010207   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.011489   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:32.016581    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:32.016581    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:32.059137    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:32.059137    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
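The loop above is the shape of every retry in this log: minikube probes for one container per control-plane component with docker ps -a --filter=name=k8s_<component> --format={{.ID}}, warns when the filter matches nothing, then falls back to gathering kubelet, dmesg, describe-nodes, Docker, and container-status logs. A minimal sketch of that probe, assuming only a local docker CLI on PATH (the component list is copied from the log; this helper is an illustration, not minikube's actual logs.go code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// One probe per control-plane component, same filter pattern as the log.
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("docker ps failed for %q: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		if len(ids) == 0 {
			// Matches the repeated warning in the log above.
			fmt.Printf("no container was found matching %q\n", c)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}

Seven consecutive empty results, as seen here, mean no Kubernetes container was ever created on the node, which is why every subsequent kubectl call in this log fails.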
	I1217 00:45:34.619048    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:34.642906    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:34.676541    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.676541    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:34.680839    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:34.710245    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.710245    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:34.715809    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:34.754209    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.754227    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:34.757792    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:34.787283    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.787283    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:34.790335    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:34.823758    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.823758    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:34.827394    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:34.856153    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.856153    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:34.859978    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:34.890024    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.890024    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:34.890024    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:34.890024    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:34.954222    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:34.954222    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:34.985196    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:34.985196    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:35.067666    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:35.054527   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.055553   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.056467   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.060229   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.061212   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:35.054527   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.055553   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.056467   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.060229   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.061212   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:35.067666    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:35.067666    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:35.109711    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:35.109711    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:37.664972    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:37.687969    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:37.717956    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.717956    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:37.721553    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:37.750935    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.750935    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:37.755377    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:37.786480    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.786480    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:37.790806    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:37.821246    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.821246    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:37.825408    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:37.854559    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.854559    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:37.858605    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:37.888189    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.888189    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:37.892436    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:37.923454    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.923454    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:37.923454    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:37.923454    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:37.990022    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:37.990022    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:38.021197    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:38.021197    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:38.107061    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:38.096713   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.097911   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.098862   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.100144   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.101044   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:38.096713   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.097911   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.098862   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.100144   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.101044   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:38.107061    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:38.107061    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:38.150052    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:38.150052    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:40.710598    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:40.738050    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:40.769637    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.769637    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:40.773468    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:40.810478    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.810478    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:40.814079    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:40.848071    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.848071    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:40.851868    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:40.880725    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.880725    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:40.884928    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:40.915221    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.915221    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:40.919101    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:40.951097    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.951097    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:40.955307    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:40.990856    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.990901    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:40.990901    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:40.990901    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:41.041987    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:41.042028    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:41.104560    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:41.104560    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:41.134782    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:41.134782    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:41.221096    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:41.210697   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.211646   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.214339   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.215988   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.217121   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:41.210697   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.211646   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.214339   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.215988   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.217121   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:41.221096    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:41.221096    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:43.768841    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:43.807393    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:43.840153    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.840153    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:43.843740    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:43.873589    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.873589    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:43.877086    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:43.906593    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.906593    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:43.910563    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:43.940004    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.940004    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:43.944461    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:43.984818    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.984818    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:43.988580    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:44.016481    7944 logs.go:282] 0 containers: []
	W1217 00:45:44.016481    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:44.020610    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:44.050198    7944 logs.go:282] 0 containers: []
	W1217 00:45:44.050225    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:44.050225    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:44.050225    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:44.096362    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:44.096362    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:44.150219    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:44.150219    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:44.209135    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:44.209135    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:44.240518    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:44.240518    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:44.328383    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:44.316790   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.317749   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.322292   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.323067   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.324563   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:44.316790   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.317749   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.322292   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.323067   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.324563   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
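Each describe-nodes attempt fails the same way: nothing accepts TCP connections on localhost:8441, which is consistent with the empty kube-apiserver probes above. A quick way to confirm the refusal independently of kubectl, as a sketch (port 8441 is taken from this log; adjust for other clusters):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Port 8441 is the apiserver port this log keeps dialing.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err) // "connection refused" here
		return
	}
	conn.Close()
	fmt.Println("something is listening on localhost:8441")
}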
	I1217 00:45:46.833977    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:46.856919    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:46.889480    7944 logs.go:282] 0 containers: []
	W1217 00:45:46.889480    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:46.893215    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:46.924373    7944 logs.go:282] 0 containers: []
	W1217 00:45:46.924373    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:46.928774    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:46.961004    7944 logs.go:282] 0 containers: []
	W1217 00:45:46.961004    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:46.964726    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:47.003673    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.003673    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:47.006719    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:47.040232    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.040232    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:47.044112    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:47.074796    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.074796    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:47.078313    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:47.109819    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.109819    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:47.109819    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:47.109819    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:47.173702    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:47.174703    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:47.204290    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:47.204290    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:47.290268    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:47.281079   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.282388   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.283451   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.284976   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.285968   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:47.281079   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.282388   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.283451   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.284976   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.285968   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:47.290268    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:47.290268    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:47.332308    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:47.332308    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:49.890367    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:49.913613    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:49.943685    7944 logs.go:282] 0 containers: []
	W1217 00:45:49.943685    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:49.947685    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:49.975458    7944 logs.go:282] 0 containers: []
	W1217 00:45:49.975458    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:49.979401    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:50.010709    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.010709    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:50.014179    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:50.046146    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.046146    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:50.050033    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:50.082525    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.082525    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:50.085833    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:50.113901    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.113943    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:50.117783    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:50.148202    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.148290    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:50.148290    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:50.148290    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:50.208056    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:50.208056    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:50.239113    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:50.239113    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:50.326281    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:50.316567   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.317935   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.319862   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.321021   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.322100   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:50.316567   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.317935   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.319862   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.321021   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.322100   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:50.326281    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:50.326281    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:50.369080    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:50.369080    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:52.932111    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:52.956351    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:52.989854    7944 logs.go:282] 0 containers: []
	W1217 00:45:52.989854    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:52.995118    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:53.022557    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.022557    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:53.027906    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:53.062035    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.062035    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:53.065640    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:53.096245    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.096245    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:53.100861    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:53.131945    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.131945    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:53.135650    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:53.164825    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.164825    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:53.168602    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:53.198961    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.198961    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:53.198961    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:53.198961    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:53.260266    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:53.260266    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:53.290682    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:53.290682    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:53.375669    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:53.366817   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.367661   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.370028   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.371310   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.372461   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:53.366817   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.367661   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.370028   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.371310   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.372461   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:53.375669    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:53.375669    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:53.416110    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:53.416110    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:55.971979    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:55.991052    7944 kubeadm.go:602] duration metric: took 4m3.9896216s to restartPrimaryControlPlane
	W1217 00:45:55.991052    7944 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 00:45:55.996485    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 00:45:56.479923    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:45:56.502762    7944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:45:56.518662    7944 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:45:56.523597    7944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:45:56.536371    7944 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:45:56.536371    7944 kubeadm.go:158] found existing configuration files:
	
	I1217 00:45:56.541198    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:45:56.554668    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:45:56.559154    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:45:56.576197    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:45:56.590283    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:45:56.594634    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:45:56.612520    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:45:56.626118    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:45:56.631259    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:45:56.648494    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:45:56.661811    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:45:56.665826    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
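The kubeadm.go:156-164 messages above implement a stale-config sweep: ls the four kubeconfig files, then for each one grep for the expected control-plane endpoint and rm -f it when the endpoint is absent. Here every grep exits with status 2 because the files were already wiped by kubeadm reset, so the removals are no-ops. A sketch of the same sweep run directly on the node, assuming root access, with the paths and endpoint copied from the log:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Endpoint and file list copied from the log above.
	endpoint := "https://control-plane.minikube.internal:8441"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Mirrors "may not be in ... - will remove" followed by rm -f.
			fmt.Printf("%s stale or missing, removing\n", f)
			os.Remove(f) // errors ignored, like rm -f
		}
	}
}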
	I1217 00:45:56.684539    7944 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:45:56.809159    7944 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 00:45:56.895277    7944 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:45:56.990840    7944 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:49:57.581295    7944 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 00:49:57.581442    7944 kubeadm.go:319] 
	I1217 00:49:57.581498    7944 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
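The failure itself is the kubelet health gate: kubeadm polls http://127.0.0.1:10248/healthz for up to 4m0s and gives up when the endpoint never answers. A sketch of an equivalent poll (the URL and the 4-minute budget come from the kubeadm output below; the 2-second interval is an assumption, kubeadm's real backoff may differ):

package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := http.Get("http://127.0.0.1:10248/healthz")
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("kubelet is healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("kubelet not healthy before deadline") // the failure seen here
}

When this probe times out, 'systemctl status kubelet' and 'journalctl -xeu kubelet' on the node, suggested further down in the same kubeadm output, are the next diagnostic step.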
	I1217 00:49:57.586513    7944 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:49:57.586513    7944 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:49:57.587141    7944 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:49:57.587141    7944 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 00:49:57.587141    7944 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 00:49:57.587141    7944 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 00:49:57.587666    7944 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_INET: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 00:49:57.588407    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 00:49:57.589479    7944 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 00:49:57.589618    7944 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 00:49:57.589771    7944 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 00:49:57.589895    7944 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 00:49:57.589957    7944 kubeadm.go:319] OS: Linux
	I1217 00:49:57.590117    7944 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:49:57.590205    7944 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:49:57.590849    7944 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:49:57.591066    7944 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:49:57.591250    7944 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:49:57.591469    7944 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:49:57.591654    7944 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:49:57.594374    7944 out.go:252]   - Generating certificates and keys ...
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:49:57.595930    7944 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:49:57.595930    7944 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:49:57.595930    7944 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:49:57.598936    7944 out.go:252]   - Booting up control plane ...
	I1217 00:49:57.598936    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:49:57.599930    7944 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001130665s
	I1217 00:49:57.599930    7944 kubeadm.go:319] 
	I1217 00:49:57.599930    7944 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 00:49:57.599930    7944 kubeadm.go:319] 	- The kubelet is not running
	I1217 00:49:57.600944    7944 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 00:49:57.600944    7944 kubeadm.go:319] 
	I1217 00:49:57.601093    7944 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 00:49:57.601093    7944 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 00:49:57.601093    7944 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 00:49:57.601093    7944 kubeadm.go:319] 
	W1217 00:49:57.601093    7944 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001130665s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 00:49:57.606482    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 00:49:58.061133    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:49:58.080059    7944 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:49:58.085171    7944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:49:58.098234    7944 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:49:58.098234    7944 kubeadm.go:158] found existing configuration files:
	
	I1217 00:49:58.102655    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:49:58.116544    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:49:58.121754    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:49:58.141782    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:49:58.155836    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:49:58.159790    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:49:58.177864    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:49:58.192169    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:49:58.196436    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:49:58.213653    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:49:58.227417    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:49:58.231893    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:49:58.251588    7944 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:49:58.366677    7944 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 00:49:58.451159    7944 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:49:58.548545    7944 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:53:59.244804    7944 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 00:53:59.244874    7944 kubeadm.go:319] 
	I1217 00:53:59.245013    7944 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 00:53:59.252131    7944 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:53:59.252131    7944 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:53:59.252131    7944 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:53:59.252131    7944 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 00:53:59.253316    7944 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 00:53:59.253422    7944 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_INET: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 00:53:59.255258    7944 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 00:53:59.255381    7944 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 00:53:59.255513    7944 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 00:53:59.255633    7944 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 00:53:59.255694    7944 kubeadm.go:319] OS: Linux
	I1217 00:53:59.255790    7944 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:53:59.255877    7944 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:53:59.255998    7944 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:53:59.256094    7944 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:53:59.256215    7944 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:53:59.256364    7944 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:53:59.256426    7944 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:53:59.256548    7944 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:53:59.256670    7944 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:53:59.256888    7944 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:53:59.257050    7944 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:53:59.257070    7944 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:53:59.257070    7944 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:53:59.272325    7944 out.go:252]   - Generating certificates and keys ...
	I1217 00:53:59.272325    7944 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:53:59.273020    7944 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:53:59.273020    7944 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:53:59.273020    7944 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:53:59.273353    7944 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:53:59.273480    7944 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:53:59.273606    7944 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:53:59.273733    7944 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:53:59.273865    7944 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:53:59.274056    7944 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:53:59.274056    7944 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:53:59.274182    7944 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:53:59.274309    7944 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:53:59.274434    7944 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:53:59.274560    7944 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:53:59.274685    7944 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:53:59.274812    7944 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:53:59.274938    7944 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:53:59.275063    7944 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:53:59.277866    7944 out.go:252]   - Booting up control plane ...
	I1217 00:53:59.277866    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:53:59.278506    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:53:59.278506    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:53:59.278506    7944 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:53:59.279865    7944 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:53:59.280054    7944 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:53:59.280189    7944 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000873338s
	I1217 00:53:59.280189    7944 kubeadm.go:319] 
	I1217 00:53:59.280189    7944 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 00:53:59.280189    7944 kubeadm.go:319] 	- The kubelet is not running
	I1217 00:53:59.280189    7944 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 00:53:59.280189    7944 kubeadm.go:319] 
	I1217 00:53:59.280189    7944 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 00:53:59.280712    7944 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 00:53:59.280785    7944 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 00:53:59.280785    7944 kubeadm.go:319] 
	I1217 00:53:59.280785    7944 kubeadm.go:403] duration metric: took 12m7.3287248s to StartCluster
	I1217 00:53:59.280785    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:59.285017    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:59.529112    7944 cri.go:89] found id: ""
	I1217 00:53:59.529112    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.529112    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:59.529112    7944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:53:59.533754    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:59.574863    7944 cri.go:89] found id: ""
	I1217 00:53:59.574863    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.574863    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:53:59.574863    7944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:53:59.579181    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:59.620688    7944 cri.go:89] found id: ""
	I1217 00:53:59.620688    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.620688    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:53:59.620688    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:59.627987    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:59.676059    7944 cri.go:89] found id: ""
	I1217 00:53:59.676114    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.676114    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:59.676114    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:59.680719    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:59.723707    7944 cri.go:89] found id: ""
	I1217 00:53:59.723707    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.723707    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:59.723707    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:59.729555    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:59.774476    7944 cri.go:89] found id: ""
	I1217 00:53:59.774476    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.774560    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:59.774560    7944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:59.780477    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:59.820909    7944 cri.go:89] found id: ""
	I1217 00:53:59.820909    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.820909    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:53:59.820909    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:59.820909    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:59.893583    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:59.893583    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:59.926154    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:59.926154    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:00.179462    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:00.169127   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.170223   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.171927   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.173016   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.174482   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:00.169127   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.170223   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.171927   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.173016   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.174482   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:00.179462    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:54:00.179462    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:54:00.221875    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:54:00.221875    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 00:54:00.281055    7944 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000873338s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 00:54:00.281122    7944 out.go:285] * 
	W1217 00:54:00.281210    7944 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000873338s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 00:54:00.281448    7944 out.go:285] * 
	W1217 00:54:00.283315    7944 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:54:00.296133    7944 out.go:203] 
	W1217 00:54:00.298699    7944 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000873338s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 00:54:00.299289    7944 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 00:54:00.299350    7944 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 00:54:00.301526    7944 out.go:203] 
	
	
	==> Docker <==
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799347277Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799352978Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799377780Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799412283Z" level=info msg="Initializing buildkit"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.911073637Z" level=info msg="Completed buildkit initialization"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918044834Z" level=info msg="Daemon has completed initialization"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918252552Z" level=info msg="API listen on [::]:2376"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918284354Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 00:41:48 functional-409700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918293455Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 00:41:48 functional-409700 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:41:48 functional-409700 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 17 00:41:48 functional-409700 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 17 00:41:49 functional-409700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Loaded network plugin cni"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 00:41:49 functional-409700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:26.691829   46007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:26.692976   46007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:26.694173   46007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:26.695290   46007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:26.697520   46007 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001333] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001212] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001083] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000810] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000879] FS:  0000000000000000 GS:  0000000000000000
	[Dec17 00:41] CPU: 8 PID: 65919 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000795] RIP: 0033:0x7fc513f26b20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7fc513f26af6.
	[  +0.000661] RSP: 002b:00007ffce9a430e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000957] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000792] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000787] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001172] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001280] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001257] FS:  0000000000000000 GS:  0000000000000000
	[  +0.952455] CPU: 6 PID: 66046 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000828] RIP: 0033:0x7f7de767eb20
	[  +0.000402] Code: Unable to access opcode bytes at RIP 0x7f7de767eaf6.
	[  +0.000691] RSP: 002b:00007ffdccfc39b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000866] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000810] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001071] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001218] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001105] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001100] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 00:57:26 up  1:16,  0 user,  load average: 0.48, 0.44, 0.46
	Linux functional-409700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:57:23 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:57:24 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 593.
	Dec 17 00:57:24 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:57:24 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:57:24 functional-409700 kubelet[45718]: E1217 00:57:24.448742   45718 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:57:24 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:57:24 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:57:25 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 594.
	Dec 17 00:57:25 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:57:25 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:57:25 functional-409700 kubelet[45738]: E1217 00:57:25.193066   45738 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:57:25 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:57:25 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:57:25 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 595.
	Dec 17 00:57:25 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:57:25 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:57:25 functional-409700 kubelet[45855]: E1217 00:57:25.937758   45855 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:57:25 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:57:25 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:57:26 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 596.
	Dec 17 00:57:26 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:57:26 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:57:26 functional-409700 kubelet[45996]: E1217 00:57:26.678002   45996 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:57:26 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:57:26 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
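The dump above gives the full failure chain for this group: kubeadm's wait-control-plane phase times out because the kubelet never answers its healthz probe, and the kubelet journal (restart counters 593-596) shows why: the v1.35.0-beta.0 kubelet fails configuration validation outright on a cgroup v1 host, and this WSL2 kernel (5.15.153.1-microsoft-standard-WSL2) is booted with cgroup v1. The SystemVerification warning in the same dump names the opt-out: set the kubelet configuration option 'FailCgroupV1' to 'false' and explicitly skip the validation (the minikube-generated kubeadm invocation already passes --ignore-preflight-errors=...,SystemVerification, which covers the second half). A minimal sketch of the corresponding KubeletConfiguration fragment, assuming the usual camelCase YAML serialization of the field named in the warning; this only unblocks validation on a deprecated setup, it does not make cgroup v1 supported:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Per the SystemVerification warning: explicitly tolerate the
    # (deprecated) cgroup v1 host instead of failing validation.
    failCgroupV1: false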
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700: exit status 2 (597.715ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-409700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmdConnect (124.25s)
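The same cgroup v1 root cause recurs across the TestFunctionalNewestKubernetes failures that follow. The suggestion printed above targets the cgroup driver (--extra-config=kubelet.cgroup-driver=systemd), but the journal errors point specifically at the cgroup v1 validation, so on a WSL2 host the more durable fix is likely to boot the backend with cgroup v2 only. A sketch, assuming the stock .wslconfig location and the standard kernelCommandLine key; cgroup_no_v1=all is the kernel parameter that disables the v1 hierarchies:

    # %UserProfile%\.wslconfig on the Windows host
    [wsl2]
    kernelCommandLine = cgroup_no_v1=all

Apply it with 'wsl --shutdown' and re-run the start; inside the guest, 'stat -fc %T /sys/fs/cgroup/' should then print cgroup2fs rather than tmpfs.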
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (243.4s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:56622/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
E1217 00:55:33.705815    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:56622/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
	[... the same EOF warning repeated 15 more times while the poll loop retried ...]
E1217 00:58:14.115665    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: Get "https://127.0.0.1:56622/api/v1/namespaces/kube-system/pods?labelSelector=integration-test%3Dstorage-provisioner": EOF
	[... the same EOF warning repeated 5 more times ...]
helpers_test.go:338: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: WARNING: pod list for "kube-system" "integration-test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
functional_test_pvc_test.go:50: ***** TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim: pod "integration-test=storage-provisioner" failed to start within 4m0s: context deadline exceeded ****
functional_test_pvc_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700
functional_test_pvc_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700: exit status 2 (663.4494ms)

-- stdout --
	Stopped

-- /stdout --
functional_test_pvc_test.go:50: status error: exit status 2 (may be ok)
functional_test_pvc_test.go:50: "functional-409700" apiserver is not running, skipping kubectl commands (state="Stopped")
functional_test_pvc_test.go:51: failed waiting for storage-provisioner: integration-test=storage-provisioner within 4m0s: context deadline exceeded
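The wait loop that produced the warnings above polls the cluster for pods matching the integration-test=storage-provisioner label until a 4m0s deadline; every EOF is one List call dying against the apiserver endpoint. A sketch of that loop using client-go — the test's real helper lives in helpers_test.go and differs in detail, and kubeconfig handling here is simplified:

    // pvcwait.go — illustrative sketch of the polling pattern, assuming
    // client-go; the label selector and 4m0s deadline come from the log above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)

    	// Poll every 5s up to 4 minutes; each failed List is one WARNING line.
    	err = wait.PollUntilContextTimeout(context.Background(), 5*time.Second, 4*time.Minute, true,
    		func(ctx context.Context) (bool, error) {
    			pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{
    				LabelSelector: "integration-test=storage-provisioner",
    			})
    			if err != nil {
    				fmt.Println("WARNING: pod list returned:", err) // keep polling on transient errors
    				return false, nil
    			}
    			return len(pods.Items) > 0, nil
    		})
    	if err != nil {
    		fmt.Println("pods never appeared:", err) // context deadline exceeded
    	}
    }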
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-409700
helpers_test.go:244: (dbg) docker inspect functional-409700:

-- stdout --
	[
	    {
	        "Id": "ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de",
	        "Created": "2025-12-17T00:24:05.223199249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:24:05.522288836Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hosts",
	        "LogPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de-json.log",
	        "Name": "/functional-409700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-409700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-409700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-409700",
	                "Source": "/var/lib/docker/volumes/functional-409700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-409700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-409700",
	                "name.minikube.sigs.k8s.io": "functional-409700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e875b43ca920e8e90c82b8f1c4d2b0999a57d980ebe17c6406f45a4ccb58168",
	            "SandboxKey": "/var/run/docker/netns/6e875b43ca92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56623"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56619"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56620"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56621"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56622"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-409700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ee1b2722ed4e503e063723d4c0c00abc99d4e57387b6e181156511528a5a0896",
	                    "EndpointID": "42fbe7a4b084643a92cc2b6c93734665bcde06afb5eef9fe47b1c8f2757b2d71",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-409700",
	                        "ee5097ea8c4b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
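The inspect dump shows the node container itself is healthy: running, 4 GiB of memory, and 8441/tcp (the apiserver port) published on 127.0.0.1:56622, which is exactly the endpoint the failed pod lists above were hitting. The harness shells out to docker inspect; the same lookup can be done with the Docker Engine Go SDK, sketched here with illustrative client options and error handling:

    // portlookup.go — illustrative sketch, not the harness's method.
    // Resolves which host port the container publishes 8441/tcp on.
    package main

    import (
    	"context"
    	"fmt"

    	"github.com/docker/docker/client"
    )

    func main() {
    	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
    	if err != nil {
    		panic(err)
    	}
    	defer cli.Close()

    	insp, err := cli.ContainerInspect(context.Background(), "functional-409700")
    	if err != nil {
    		panic(err)
    	}
    	// NetworkSettings.Ports maps container ports to host bindings; in the
    	// dump above, 8441/tcp is bound to 127.0.0.1:56622.
    	for _, b := range insp.NetworkSettings.Ports["8441/tcp"] {
    		fmt.Printf("apiserver published at %s:%s\n", b.HostIP, b.HostPort)
    	}
    }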
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700: exit status 2 (570.6896ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 logs -n 25: (1.4995714s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ image          │ functional-409700 image ls                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image load --daemon kicbase/echo-server:functional-409700 --alsologtostderr                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image ls                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image load --daemon kicbase/echo-server:functional-409700 --alsologtostderr                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image ls                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image save kicbase/echo-server:functional-409700 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image rm kicbase/echo-server:functional-409700 --alsologtostderr                                                                        │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image ls                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image ls                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image save --daemon kicbase/echo-server:functional-409700 --alsologtostderr                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ start          │ -p functional-409700 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0                                       │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │                     │
	│ start          │ -p functional-409700 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0                                       │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │                     │
	│ start          │ -p functional-409700 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-beta.0                                                 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-409700 --alsologtostderr -v=1                                                                                            │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │                     │
	│ update-context │ functional-409700 update-context --alsologtostderr -v=2                                                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │ 17 Dec 25 00:57 UTC │
	│ update-context │ functional-409700 update-context --alsologtostderr -v=2                                                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │ 17 Dec 25 00:57 UTC │
	│ update-context │ functional-409700 update-context --alsologtostderr -v=2                                                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │ 17 Dec 25 00:57 UTC │
	│ image          │ functional-409700 image ls --format short --alsologtostderr                                                                                               │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │ 17 Dec 25 00:57 UTC │
	│ image          │ functional-409700 image ls --format yaml --alsologtostderr                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │ 17 Dec 25 00:57 UTC │
	│ ssh            │ functional-409700 ssh pgrep buildkitd                                                                                                                     │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │                     │
	│ image          │ functional-409700 image build -t localhost/my-image:functional-409700 testdata\build --alsologtostderr                                                    │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │ 17 Dec 25 00:57 UTC │
	│ image          │ functional-409700 image ls --format json --alsologtostderr                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │ 17 Dec 25 00:57 UTC │
	│ image          │ functional-409700 image ls --format table --alsologtostderr                                                                                               │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │ 17 Dec 25 00:57 UTC │
	│ image          │ functional-409700 image ls                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │ 17 Dec 25 00:57 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:57:29
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:57:29.312362   13608 out.go:360] Setting OutFile to fd 1036 ...
	I1217 00:57:29.401841   13608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:57:29.401841   13608 out.go:374] Setting ErrFile to fd 1776...
	I1217 00:57:29.401841   13608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:57:29.414842   13608 out.go:368] Setting JSON to false
	I1217 00:57:29.416844   13608 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4637,"bootTime":1765928411,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:57:29.416844   13608 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:57:29.420835   13608 out.go:179] * [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:57:29.424836   13608 notify.go:221] Checking for updates...
	I1217 00:57:29.426837   13608 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:57:29.428844   13608 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:57:29.430846   13608 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:57:29.432842   13608 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:57:29.435843   13608 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:57:29.165357   10540 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:57:29.165357   10540 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:57:29.278361   10540 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:57:29.282363   10540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:57:29.529841   10540 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:90 SystemTime:2025-12-17 00:57:29.506866483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:57:29.532834   10540 out.go:179] * Using the docker driver based on existing profile
	I1217 00:57:29.535840   10540 start.go:309] selected driver: docker
	I1217 00:57:29.535840   10540 start.go:927] validating driver "docker" against &{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:57:29.535840   10540 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:57:29.589847   10540 out.go:203] 
	W1217 00:57:29.591839   10540 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 00:57:29.593846   10540 out.go:203] 
	I1217 00:57:29.437835   13608 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:57:29.438844   13608 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:57:29.580837   13608 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:57:29.583837   13608 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:57:29.817352   13608 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 00:57:29.796553997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:57:29.820347   13608 out.go:179] * Using the docker driver based on existing profile
	I1217 00:57:29.823346   13608 start.go:309] selected driver: docker
	I1217 00:57:29.823346   13608 start.go:927] validating driver "docker" against &{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:57:29.823346   13608 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:57:29.829348   13608 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:57:30.066976   13608 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 00:57:30.047165036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:57:30.101067   13608 cni.go:84] Creating CNI manager for ""
	I1217 00:57:30.101067   13608 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:57:30.101067   13608 start.go:353] cluster config:
	{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:57:30.105229   13608 out.go:179] * dry-run validation complete!
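	The interleaved PIDs in this section (10540 and 13608) are two parallel --dry-run starts; the first is rejected by minikube's memory validation before any driver work happens. A sketch of that gate as reconstructed from the log message alone — the 1800MB floor and the error wording are read off the output above, not taken from minikube's source:

	    // memcheck.go — hypothetical reconstruction of the validation that
	    // produced the RSRC_INSUFFICIENT_REQ_MEMORY exit above.
	    package main

	    import "fmt"

	    const minUsableMB = 1800 // usable minimum reported by the failing start

	    func validateRequestedMemory(requestedMB int) error {
	    	if requestedMB < minUsableMB {
	    		return fmt.Errorf(
	    			"RSRC_INSUFFICIENT_REQ_MEMORY: requested memory allocation %dMiB is less than the usable minimum of %dMB",
	    			requestedMB, minUsableMB)
	    	}
	    	return nil
	    }

	    func main() {
	    	// --memory 250MB, from the dry-run invocations in the audit table above.
	    	if err := validateRequestedMemory(250); err != nil {
	    		fmt.Println("X Exiting due to", err)
	    	}
	    }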
	
	
	==> Docker <==
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799412283Z" level=info msg="Initializing buildkit"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.911073637Z" level=info msg="Completed buildkit initialization"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918044834Z" level=info msg="Daemon has completed initialization"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918252552Z" level=info msg="API listen on [::]:2376"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918284354Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 00:41:48 functional-409700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918293455Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 00:41:48 functional-409700 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:41:48 functional-409700 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 17 00:41:48 functional-409700 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 17 00:41:49 functional-409700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Loaded network plugin cni"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 00:41:49 functional-409700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 17 00:57:33 functional-409700 dockerd[21759]: 2025/12/17 00:57:33 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Dec 17 00:57:33 functional-409700 dockerd[21759]: 2025/12/17 00:57:33 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Dec 17 00:57:35 functional-409700 dockerd[21759]: time="2025-12-17T00:57:35.777815339Z" level=info msg="sbJoin: gwep4 ''->'fbacab4187f3', gwep6 ''->''"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:59:25.009312   48572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:25.010550   48572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:25.011701   48572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:25.012545   48572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:59:25.014694   48572 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001333] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001212] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001083] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000810] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000879] FS:  0000000000000000 GS:  0000000000000000
	[Dec17 00:41] CPU: 8 PID: 65919 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000795] RIP: 0033:0x7fc513f26b20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7fc513f26af6.
	[  +0.000661] RSP: 002b:00007ffce9a430e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000957] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000792] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000787] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001172] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001280] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001257] FS:  0000000000000000 GS:  0000000000000000
	[  +0.952455] CPU: 6 PID: 66046 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000828] RIP: 0033:0x7f7de767eb20
	[  +0.000402] Code: Unable to access opcode bytes at RIP 0x7f7de767eaf6.
	[  +0.000691] RSP: 002b:00007ffdccfc39b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000866] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000810] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001071] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001218] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001105] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001100] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 00:59:25 up  1:18,  0 user,  load average: 0.25, 0.38, 0.44
	Linux functional-409700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:59:22 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:59:22 functional-409700 kubelet[48391]: E1217 00:59:22.171114   48391 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:59:22 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:59:22 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:59:22 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 751.
	Dec 17 00:59:22 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:59:22 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:59:22 functional-409700 kubelet[48409]: E1217 00:59:22.919311   48409 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:59:22 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:59:22 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:59:23 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 752.
	Dec 17 00:59:23 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:59:23 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:59:23 functional-409700 kubelet[48442]: E1217 00:59:23.631890   48442 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:59:23 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:59:23 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:59:24 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 753.
	Dec 17 00:59:24 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:59:24 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:59:24 functional-409700 kubelet[48492]: E1217 00:59:24.427721   48492 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:59:24 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:59:24 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:59:25 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 754.
	Dec 17 00:59:25 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:59:25 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
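
The kubelet section above shows the likely root cause for this failure group: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1") and systemd is crash-looping it (restart counter 751 through 754 in a few seconds), so the apiserver on port 8441 never comes up and every kubectl call above is refused. A minimal diagnostic sketch to confirm the node's cgroup mode (not part of the test harness; profile name taken from this run):

	# prints "cgroup2fs" on a cgroup v2 host, "tmpfs" when the node is still on cgroup v1
	minikube -p functional-409700 ssh -- stat -fc %T /sys/fs/cgroup/

Given the WSL2 kernel shown in the kernel section (5.15.153.1-microsoft-standard-WSL2), one commonly suggested remedy is switching WSL2 to cgroup v2 (e.g. kernelCommandLine = cgroup_no_v1=all under [wsl2] in .wslconfig, then restarting WSL) before retrying the start; this is an assumption about the environment, not a step the harness performed.
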
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700: exit status 2 (582.8843ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-409700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PersistentVolumeClaim (243.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (23.84s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-409700 replace --force -f testdata\mysql.yaml
functional_test.go:1798: (dbg) Non-zero exit: kubectl --context functional-409700 replace --force -f testdata\mysql.yaml: exit status 1 (20.2211616s)

** stderr ** 
	E1217 00:56:03.336978    8824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:56:13.423593    8824 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	unable to recognize "testdata\\mysql.yaml": Get "https://127.0.0.1:56622/api?timeout=32s": EOF
	unable to recognize "testdata\\mysql.yaml": Get "https://127.0.0.1:56622/api?timeout=32s": EOF

** /stderr **
functional_test.go:1800: failed to kubectl replace mysql: args "kubectl --context functional-409700 replace --force -f testdata\\mysql.yaml" failed: exit status 1
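
Note the endpoint in the stderr above: kubectl dials https://127.0.0.1:56622, which, per the docker inspect output in the post-mortem below, is the host port Docker Desktop publishes for the node's 8441/tcp apiserver port. The TCP connection is accepted (the mapping exists), but nothing is listening on 8441 inside the container, so the client most likely reads EOF instead of "connection refused"; the apiserver is down for the same kubelet/cgroup reason shown in the previous test's logs. The mapping can be confirmed with the same inspect template the harness itself uses for 22/tcp in the Last Start log (diagnostic sketch):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-409700
	# expected output for this run: 56622
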
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-409700
helpers_test.go:244: (dbg) docker inspect functional-409700:

-- stdout --
	[
	    {
	        "Id": "ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de",
	        "Created": "2025-12-17T00:24:05.223199249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:24:05.522288836Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hosts",
	        "LogPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de-json.log",
	        "Name": "/functional-409700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-409700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-409700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-409700",
	                "Source": "/var/lib/docker/volumes/functional-409700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-409700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-409700",
	                "name.minikube.sigs.k8s.io": "functional-409700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e875b43ca920e8e90c82b8f1c4d2b0999a57d980ebe17c6406f45a4ccb58168",
	            "SandboxKey": "/var/run/docker/netns/6e875b43ca92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56623"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56619"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56620"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56621"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56622"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-409700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ee1b2722ed4e503e063723d4c0c00abc99d4e57387b6e181156511528a5a0896",
	                    "EndpointID": "42fbe7a4b084643a92cc2b6c93734665bcde06afb5eef9fe47b1c8f2757b2d71",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-409700",
	                        "ee5097ea8c4b"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700: exit status 2 (567.9756ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 logs -n 25: (1.278672s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL logs: 
-- stdout --
	
	==> Audit <==
	┌────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│  COMMAND   │                                                                                                 ARGS                                                                                                  │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ config     │ functional-409700 config get cpus                                                                                                                                                                     │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh cat /etc/hostname                                                                                                                                                               │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ service    │ functional-409700 service list -o json                                                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ config     │ functional-409700 config unset cpus                                                                                                                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ config     │ functional-409700 config get cpus                                                                                                                                                                     │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ cp         │ functional-409700 cp functional-409700:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2573441544\001\cp-test.txt │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ tunnel     │ functional-409700 tunnel --alsologtostderr                                                                                                                                                            │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ tunnel     │ functional-409700 tunnel --alsologtostderr                                                                                                                                                            │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ service    │ functional-409700 service --namespace=default --https --url hello-node                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ ssh        │ functional-409700 ssh -n functional-409700 sudo cat /home/docker/cp-test.txt                                                                                                                          │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ tunnel     │ functional-409700 tunnel --alsologtostderr                                                                                                                                                            │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ service    │ functional-409700 service hello-node --url --format={{.IP}}                                                                                                                                           │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ cp         │ functional-409700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt                                                                                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ service    │ functional-409700 service hello-node --url                                                                                                                                                            │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │                     │
	│ ssh        │ functional-409700 ssh -n functional-409700 sudo cat /tmp/does/not/exist/cp-test.txt                                                                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ addons     │ functional-409700 addons list                                                                                                                                                                         │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ addons     │ functional-409700 addons list -o json                                                                                                                                                                 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /etc/ssl/certs/4168.pem                                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /usr/share/ca-certificates/4168.pem                                                                                                                                    │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /etc/ssl/certs/51391683.0                                                                                                                                              │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /etc/ssl/certs/41682.pem                                                                                                                                               │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /usr/share/ca-certificates/41682.pem                                                                                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /etc/ssl/certs/3ec20f2e.0                                                                                                                                              │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ docker-env │ functional-409700 docker-env                                                                                                                                                                          │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	│ ssh        │ functional-409700 ssh sudo cat /etc/test/nested/copy/4168/hosts                                                                                                                                       │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:55 UTC │ 17 Dec 25 00:55 UTC │
	└────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:41:42
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:41:42.742737    7944 out.go:360] Setting OutFile to fd 1692 ...
	I1217 00:41:42.785452    7944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:41:42.785452    7944 out.go:374] Setting ErrFile to fd 2032...
	I1217 00:41:42.785452    7944 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:41:42.823093    7944 out.go:368] Setting JSON to false
	I1217 00:41:42.826928    7944 start.go:133] hostinfo: {"hostname":"minikube4","uptime":3691,"bootTime":1765928411,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:41:42.827062    7944 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:41:42.832423    7944 out.go:179] * [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:41:42.834008    7944 notify.go:221] Checking for updates...
	I1217 00:41:42.836028    7944 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:41:42.837747    7944 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:41:42.839400    7944 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:41:42.841743    7944 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:41:42.843853    7944 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:41:42.846824    7944 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:41:42.847138    7944 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:41:43.032802    7944 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:41:43.036200    7944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:41:43.287623    7944 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-17 00:41:43.26443223 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:41:43.290624    7944 out.go:179] * Using the docker driver based on existing profile
	I1217 00:41:43.295624    7944 start.go:309] selected driver: docker
	I1217 00:41:43.295624    7944 start.go:927] validating driver "docker" against &{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:41:43.295624    7944 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:41:43.302622    7944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:41:43.528811    7944 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-17 00:41:43.511883839 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:41:43.567003    7944 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 00:41:43.567003    7944 cni.go:84] Creating CNI manager for ""
	I1217 00:41:43.567003    7944 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:41:43.567003    7944 start.go:353] cluster config:
	{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:41:43.571110    7944 out.go:179] * Starting "functional-409700" primary control-plane node in "functional-409700" cluster
	I1217 00:41:43.575004    7944 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 00:41:43.577924    7944 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:41:43.581930    7944 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:41:43.581930    7944 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:41:43.581930    7944 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 00:41:43.581930    7944 cache.go:65] Caching tarball of preloaded images
	I1217 00:41:43.582517    7944 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 00:41:43.582517    7944 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 00:41:43.582517    7944 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\config.json ...
	I1217 00:41:43.660928    7944 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 00:41:43.660928    7944 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 00:41:43.660928    7944 cache.go:243] Successfully downloaded all kic artifacts
	I1217 00:41:43.660928    7944 start.go:360] acquireMachinesLock for functional-409700: {Name:mk3729943c20c012b6c7db136193ce43a4a81cc3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 00:41:43.660928    7944 start.go:364] duration metric: took 0s to acquireMachinesLock for "functional-409700"
	I1217 00:41:43.660928    7944 start.go:96] Skipping create...Using existing machine configuration
	I1217 00:41:43.660928    7944 fix.go:54] fixHost starting: 
	I1217 00:41:43.667914    7944 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
	I1217 00:41:43.723914    7944 fix.go:112] recreateIfNeeded on functional-409700: state=Running err=<nil>
	W1217 00:41:43.723914    7944 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 00:41:43.726919    7944 out.go:252] * Updating the running docker "functional-409700" container ...
	I1217 00:41:43.726919    7944 machine.go:94] provisionDockerMachine start ...
	I1217 00:41:43.731914    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:43.796916    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:43.796916    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:43.796916    7944 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 00:41:43.969131    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:41:43.969131    7944 ubuntu.go:182] provisioning hostname "functional-409700"
	I1217 00:41:43.975058    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.033428    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:44.033980    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:44.033980    7944 main.go:143] libmachine: About to run SSH command:
	sudo hostname functional-409700 && echo "functional-409700" | sudo tee /etc/hostname
	I1217 00:41:44.218389    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: functional-409700
	
	I1217 00:41:44.221624    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.281826    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:44.282333    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:44.282333    7944 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfunctional-409700' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 functional-409700/g' /etc/hosts;
				else 
					echo '127.0.1.1 functional-409700' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 00:41:44.449024    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 00:41:44.449024    7944 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 00:41:44.449024    7944 ubuntu.go:190] setting up certificates
	I1217 00:41:44.449024    7944 provision.go:84] configureAuth start
	I1217 00:41:44.452071    7944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:41:44.516121    7944 provision.go:143] copyHostCerts
	I1217 00:41:44.516430    7944 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 00:41:44.516430    7944 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 00:41:44.516430    7944 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 00:41:44.517399    7944 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 00:41:44.517399    7944 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 00:41:44.517399    7944 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 00:41:44.518364    7944 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 00:41:44.518364    7944 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 00:41:44.518364    7944 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 00:41:44.519103    7944 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.functional-409700 san=[127.0.0.1 192.168.49.2 functional-409700 localhost minikube]
	I1217 00:41:44.613354    7944 provision.go:177] copyRemoteCerts
	I1217 00:41:44.617354    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 00:41:44.620354    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.676405    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:44.805633    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 00:41:44.840310    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 00:41:44.872497    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 00:41:44.899304    7944 provision.go:87] duration metric: took 450.2424ms to configureAuth
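
configureAuth above does three things: it refreshes the host-side copies of ca/cert/key.pem, mints a server certificate whose SAN list covers every name the daemon may be reached by (127.0.0.1, 192.168.49.2, functional-409700, localhost, minikube), and scps the result to /etc/docker so dockerd can run with --tlsverify. (The repeated argument in `sudo mkdir -p /etc/docker /etc/docker /etc/docker` is harmless; each remote cert path apparently contributes its parent directory.) The SANs on the minted server.pem can be checked with a few lines of Go (a sketch; the path is the one from the log, adjust for your environment):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path taken from the log above.
	data, err := os.ReadFile(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem`)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Expect the DNS and IP SANs to match the san=[...] list in the log.
	fmt.Println("DNS SANs:", cert.DNSNames)
	fmt.Println("IP SANs: ", cert.IPAddresses)
}
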
	I1217 00:41:44.899304    7944 ubuntu.go:206] setting minikube options for container-runtime
	I1217 00:41:44.899304    7944 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:41:44.902693    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:44.962192    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:44.962661    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:44.962688    7944 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 00:41:45.129265    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 00:41:45.129265    7944 ubuntu.go:71] root file system type: overlay
	I1217 00:41:45.129265    7944 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 00:41:45.133980    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.191141    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:45.191583    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:45.191676    7944 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 00:41:45.381081    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 00:41:45.384910    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.439634    7944 main.go:143] libmachine: Using SSH client type: native
	I1217 00:41:45.439634    7944 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 56623 <nil> <nil>}
	I1217 00:41:45.439634    7944 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 00:41:45.639837    7944 main.go:143] libmachine: SSH cmd err, output: <nil>: 
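
Note the update pattern in the last SSH command: the rendered unit is written to docker.service.new, diffed against the live unit, and only on a mismatch moved into place followed by daemon-reload, enable, and restart. Because `diff -u` exits 0 when the files are identical, an unchanged unit never triggers a docker restart. The empty `ExecStart=` line inside the unit is the standard systemd idiom for clearing an inherited ExecStart before setting a new one, exactly as the comment block in the unit explains. The same compare-then-swap step in Go (a sketch that assumes it runs on the target Linux machine with sudo available):

package main

import (
	"fmt"
	"os/exec"
)

// updateDockerUnit swaps in the new unit only when it differs from the
// live one, then reloads systemd and restarts docker.
func updateDockerUnit() error {
	cmd := `sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { ` +
		`sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; ` +
		`sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }`
	return exec.Command("sh", "-c", cmd).Run()
}

func main() {
	if err := updateDockerUnit(); err != nil {
		fmt.Println("unit update failed:", err)
	}
}
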
	I1217 00:41:45.639837    7944 machine.go:97] duration metric: took 1.9128981s to provisionDockerMachine
	I1217 00:41:45.639837    7944 start.go:293] postStartSetup for "functional-409700" (driver="docker")
	I1217 00:41:45.639837    7944 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 00:41:45.643968    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 00:41:45.647579    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.702256    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:45.830302    7944 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 00:41:45.840912    7944 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 00:41:45.840912    7944 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 00:41:45.840912    7944 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 00:41:45.840912    7944 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 00:41:45.841469    7944 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 00:41:45.842433    7944 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts -> hosts in /etc/test/nested/copy/4168
	I1217 00:41:45.846605    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs /etc/test/nested/copy/4168
	I1217 00:41:45.861850    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 00:41:45.894051    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts --> /etc/test/nested/copy/4168/hosts (40 bytes)
	I1217 00:41:45.924540    7944 start.go:296] duration metric: took 284.7004ms for postStartSetup
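
postStartSetup mirrors everything under .minikube\files into the machine at the same path: ...\files\etc\ssl\certs\41682.pem becomes /etc/ssl/certs/41682.pem, and the nested hosts file lands in /etc/test/nested/copy/4168. A sketch of the local-to-remote path mapping (a hypothetical helper, not minikube's filesync implementation):

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
)

// mirrorTree maps every file under root to the absolute remote path it
// should be copied to, preserving the relative directory structure.
func mirrorTree(root string) (map[string]string, error) {
	out := map[string]string{}
	err := filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, rerr := filepath.Rel(root, p)
		if rerr != nil {
			return rerr
		}
		out[p] = "/" + filepath.ToSlash(rel)
		return nil
	})
	return out, err
}

func main() {
	m, err := mirrorTree(".")
	if err != nil {
		panic(err)
	}
	for local, remote := range m {
		fmt.Println(local, "->", remote)
	}
}
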
	I1217 00:41:45.929030    7944 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 00:41:45.931390    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:45.988238    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:46.118181    7944 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 00:41:46.128256    7944 fix.go:56] duration metric: took 2.4673029s for fixHost
	I1217 00:41:46.128336    7944 start.go:83] releasing machines lock for "functional-409700", held for 2.4673029s
	I1217 00:41:46.132380    7944 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" functional-409700
	I1217 00:41:46.192243    7944 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 00:41:46.196238    7944 ssh_runner.go:195] Run: cat /version.json
	I1217 00:41:46.196238    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:46.199443    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:46.250894    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:46.252723    7944 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
	I1217 00:41:46.374927    7944 ssh_runner.go:195] Run: systemctl --version
	W1217 00:41:46.375040    7944 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 00:41:46.393243    7944 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 00:41:46.405015    7944 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 00:41:46.411122    7944 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 00:41:46.427748    7944 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 00:41:46.427748    7944 start.go:496] detecting cgroup driver to use...
	I1217 00:41:46.427748    7944 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:41:46.428359    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:41:46.459279    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 00:41:46.481169    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 00:41:46.495981    7944 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 00:41:46.501301    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 00:41:46.522269    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 00:41:46.543007    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 00:41:46.564748    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W1217 00:41:46.571173    7944 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 00:41:46.571173    7944 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
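
The registry probe failed with "curl.exe: command not found" because the command was executed inside the Debian-based minikube container, where the binary is plain curl; the Windows-style executable name appears to have leaked from the host into the remote command (an inference from this log, not a confirmed root cause). The proxy warning above is therefore likely a false lead here. A defensive sketch that picks the binary name by target OS rather than host OS (hypothetical helper):

package main

import (
	"fmt"
	"runtime"
)

// curlBinaryFor picks the curl binary name for the shell that will run
// it. The probe above runs inside a Linux guest, so the host OS must
// not decide the name.
func curlBinaryFor(targetGOOS string) string {
	if targetGOOS == "windows" {
		return "curl.exe"
	}
	return "curl"
}

func main() {
	fmt.Println("host:", runtime.GOOS, "-> remote binary:", curlBinaryFor("linux"))
}
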
	I1217 00:41:46.587140    7944 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 00:41:46.608125    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 00:41:46.628561    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 00:41:46.651071    7944 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 00:41:46.670567    7944 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 00:41:46.691876    7944 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 00:41:46.708884    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:46.907593    7944 ssh_runner.go:195] Run: sudo systemctl restart containerd
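
Having detected "cgroupfs" on the host, the runner rewrites /etc/containerd/config.toml in place with a series of sed edits (SystemdCgroup = false, the v2 runc shim, the pause:3.10.1 sandbox image, the CNI conf_dir, unprivileged ports), enables IP forwarding, then daemon-reloads and restarts containerd. The same edits could be scripted as below (a sketch reusing a subset of the sed expressions from the log; it assumes it runs on the target machine with sudo available):

package main

import (
	"log"
	"os/exec"
)

func main() {
	edits := []string{
		`sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml`,
		`sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml`,
		`sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml`,
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
	}
	for _, e := range edits {
		if out, err := exec.Command("sh", "-c", e).CombinedOutput(); err != nil {
			log.Fatalf("%s: %v\n%s", e, err, out)
		}
	}
	if err := exec.Command("sh", "-c",
		"sudo systemctl daemon-reload && sudo systemctl restart containerd").Run(); err != nil {
		log.Fatal(err)
	}
}
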
	I1217 00:41:47.157536    7944 start.go:496] detecting cgroup driver to use...
	I1217 00:41:47.157588    7944 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 00:41:47.161701    7944 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 00:41:47.187508    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:41:47.211591    7944 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 00:41:47.291331    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 00:41:47.315837    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 00:41:47.336371    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 00:41:47.365154    7944 ssh_runner.go:195] Run: which cri-dockerd
	I1217 00:41:47.376814    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 00:41:47.391947    7944 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 00:41:47.416863    7944 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 00:41:47.573803    7944 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 00:41:47.742508    7944 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 00:41:47.742508    7944 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 00:41:47.769569    7944 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 00:41:47.792419    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:47.926195    7944 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 00:41:48.924753    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 00:41:48.948387    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 00:41:48.972423    7944 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 00:41:49.001034    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:41:49.024808    7944 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 00:41:49.170637    7944 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 00:41:49.341524    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:49.489502    7944 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 00:41:49.515161    7944 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 00:41:49.538565    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:49.678445    7944 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 00:41:49.792662    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 00:41:49.810919    7944 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 00:41:49.817201    7944 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 00:41:49.824745    7944 start.go:564] Will wait 60s for crictl version
	I1217 00:41:49.829680    7944 ssh_runner.go:195] Run: which crictl
	I1217 00:41:49.841215    7944 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 00:41:49.886490    7944 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 00:41:49.890545    7944 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:41:49.932656    7944 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 00:41:49.973421    7944 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 00:41:49.976704    7944 cli_runner.go:164] Run: docker exec -t functional-409700 dig +short host.docker.internal
	I1217 00:41:50.163467    7944 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 00:41:50.168979    7944 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 00:41:50.182632    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:50.243980    7944 out.go:179]   - apiserver.enable-admission-plugins=NamespaceAutoProvision
	I1217 00:41:50.246233    7944 kubeadm.go:884] updating cluster {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 00:41:50.246321    7944 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:41:50.249328    7944 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:41:50.284688    7944 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-409700
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1217 00:41:50.284688    7944 docker.go:621] Images already preloaded, skipping extraction
	I1217 00:41:50.288341    7944 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 00:41:50.318208    7944 docker.go:691] Got preloaded images: -- stdout --
	minikube-local-cache-test:functional-409700
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	registry.k8s.io/pause:3.3
	registry.k8s.io/pause:3.1
	registry.k8s.io/pause:latest
	
	-- /stdout --
	I1217 00:41:50.318208    7944 cache_images.go:86] Images are preloaded, skipping loading
	I1217 00:41:50.318208    7944 kubeadm.go:935] updating node { 192.168.49.2 8441 v1.35.0-beta.0 docker true true} ...
	I1217 00:41:50.318208    7944 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=functional-409700 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 00:41:50.322786    7944 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 00:41:50.580992    7944 extraconfig.go:125] Overwriting default enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota with user provided enable-admission-plugins=NamespaceAutoProvision for component apiserver
	I1217 00:41:50.580992    7944 cni.go:84] Creating CNI manager for ""
	I1217 00:41:50.580992    7944 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:41:50.580992    7944 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 00:41:50.580992    7944 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8441 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:functional-409700 NodeName:functional-409700 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceAutoProvision] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 00:41:50.581552    7944 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8441
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "functional-409700"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceAutoProvision"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8441
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
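
The kubeadm.yaml above is rendered from the options struct logged at kubeadm.go:190. Two details are easy to miss: the kubelet eviction thresholds of "0%" together with imageGCHighThresholdPercent: 100 deliberately disable disk-pressure eviction and image GC for the test VM, and the zero conntrack timeouts mean "leave the corresponding sysctls alone", as the inline comments say. The extraArgs rendering style can be reproduced with a few lines of text/template (a sketch in the spirit of the generation step, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

const tmpl = `apiServer:
  extraArgs:
{{- range $k, $v := .APIServerArgs }}
    - name: "{{ $k }}"
      value: "{{ $v }}"
{{- end }}
`

func main() {
	data := struct{ APIServerArgs map[string]string }{
		APIServerArgs: map[string]string{"enable-admission-plugins": "NamespaceAutoProvision"},
	}
	t := template.Must(template.New("kubeadm").Parse(tmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}
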
	
	I1217 00:41:50.586113    7944 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 00:41:50.602747    7944 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 00:41:50.606600    7944 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 00:41:50.618442    7944 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 00:41:50.639202    7944 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 00:41:50.660303    7944 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2075 bytes)
	I1217 00:41:50.686181    7944 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1217 00:41:50.699393    7944 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 00:41:50.841016    7944 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 00:41:50.909095    7944 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700 for IP: 192.168.49.2
	I1217 00:41:50.909095    7944 certs.go:195] generating shared ca certs ...
	I1217 00:41:50.909181    7944 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:41:50.909751    7944 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 00:41:50.909751    7944 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 00:41:50.909751    7944 certs.go:257] generating profile certs ...
	I1217 00:41:50.911054    7944 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\client.key
	I1217 00:41:50.911486    7944 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key.dc66fb1b
	I1217 00:41:50.911858    7944 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key
	I1217 00:41:50.913273    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 00:41:50.913634    7944 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 00:41:50.913687    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 00:41:50.913976    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 00:41:50.914271    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 00:41:50.914593    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 00:41:50.915068    7944 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 00:41:50.916395    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 00:41:50.945779    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 00:41:50.974173    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 00:41:51.006494    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 00:41:51.039634    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 00:41:51.069500    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 00:41:51.095965    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 00:41:51.124108    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\functional-409700\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 00:41:51.153111    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 00:41:51.181612    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 00:41:51.209244    7944 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 00:41:51.236994    7944 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 00:41:51.261730    7944 ssh_runner.go:195] Run: openssl version
	I1217 00:41:51.280852    7944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.301978    7944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 00:41:51.322912    7944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.331873    7944 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.336845    7944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 00:41:51.388885    7944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 00:41:51.407531    7944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.426119    7944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 00:41:51.446689    7944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.455113    7944 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.459541    7944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 00:41:51.507465    7944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 00:41:51.525452    7944 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.543170    7944 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 00:41:51.560439    7944 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.566853    7944 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.571342    7944 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 00:41:51.621647    7944 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 00:41:51.639899    7944 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 00:41:51.651440    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 00:41:51.702199    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 00:41:51.752106    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 00:41:51.800819    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 00:41:51.851441    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 00:41:51.900439    7944 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
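
Before deciding on a restart path, every control-plane certificate is probed with `openssl x509 -checkend 86400`, which exits non-zero when the certificate expires within the next 24 hours; the earlier `-hash` calls compute the OpenSSL subject hashes used for the /etc/ssl/certs/<hash>.0 symlinks. The -checkend test is equivalent to comparing NotAfter against now+24h (a Go sketch):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires
// within d, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block in " + path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	// Path from the log; run inside the guest, or point it at any cert.
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}
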
	I1217 00:41:51.944312    7944 kubeadm.go:401] StartCluster: {Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:41:51.948688    7944 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 00:41:51.985002    7944 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 00:41:51.998839    7944 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 00:41:51.998925    7944 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 00:41:52.003287    7944 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 00:41:52.016206    7944 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.019955    7944 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
	I1217 00:41:52.077101    7944 kubeconfig.go:125] found "functional-409700" server: "https://127.0.0.1:56622"
	I1217 00:41:52.084213    7944 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 00:41:52.100216    7944 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 00:24:17.645837868 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 00:41:50.679316242 +0000
	@@ -24,7 +24,7 @@
	   certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	-      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+      value: "NamespaceAutoProvision"
	 controllerManager:
	   extraArgs:
	     - name: "allocate-node-cidrs"
	
	-- /stdout --
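
Drift detection is just a diff: the freshly rendered kubeadm.yaml.new is compared against the kubeadm.yaml from the previous start, and any difference (here, the swap of the default admission-plugin list for NamespaceAutoProvision) forces a reconfigure instead of a plain restart. `diff -u` exits 1 when the files differ, which a caller can translate into a boolean (a sketch using local exec):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

// configDrift reports whether the rendered kubeadm config differs from
// the one already on disk, treating diff's exit code 1 as "drift".
func configDrift(oldPath, newPath string) (bool, error) {
	err := exec.Command("sudo", "diff", "-u", oldPath, newPath).Run()
	if err == nil {
		return false, nil
	}
	var ee *exec.ExitError
	if errors.As(err, &ee) && ee.ExitCode() == 1 {
		return true, nil // files differ
	}
	return false, err // diff itself failed
}

func main() {
	drift, err := configDrift("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	fmt.Println(drift, err)
}
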
	I1217 00:41:52.100258    7944 kubeadm.go:1161] stopping kube-system containers ...
	I1217 00:41:52.104145    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 00:41:52.137767    7944 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 00:41:52.163943    7944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:41:52.178186    7944 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5635 Dec 17 00:28 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5636 Dec 17 00:28 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5672 Dec 17 00:28 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5584 Dec 17 00:28 /etc/kubernetes/scheduler.conf
	
	I1217 00:41:52.182824    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:41:52.204493    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:41:52.219638    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.223951    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:41:52.243159    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:41:52.260005    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.264353    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:41:52.281662    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:41:52.297828    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 00:41:52.301928    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:41:52.320845    7944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:41:52.344713    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:52.568408    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:53.273580    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:53.519011    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 00:41:53.597190    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
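
Because existing configuration files were found, restartPrimaryControlPlane replays individual `kubeadm init` phases against the new config rather than re-initializing from scratch: certs, kubeconfig, kubelet-start, control-plane, and etcd, in that order. The sequencing can be expressed as a simple loop (a sketch; `run` stands in for the SSH runner used above):

package main

import "fmt"

// initPhases replays the kubeadm init phases used by the restart path.
func initPhases(run func(string) error, version string) error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, phase := range phases {
		cmd := fmt.Sprintf(`sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml"`, version, phase)
		if err := run(cmd); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// Dry run: print each command instead of executing it.
	_ = initPhases(func(cmd string) error { fmt.Println(cmd); return nil }, "v1.35.0-beta.0")
}
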
	I1217 00:41:53.657031    7944 api_server.go:52] waiting for apiserver process to appear ...
	I1217 00:41:53.662643    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:54.162433    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:54.661965    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:55.162165    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:55.662293    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:56.162422    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:56.662001    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:57.162515    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:57.662491    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:58.162857    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:58.662457    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:59.161782    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:41:59.663346    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:00.162336    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:00.662670    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:01.161692    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:01.663703    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:02.163358    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:02.663185    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:03.161803    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:03.663829    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:04.166542    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:04.662220    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:05.162702    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:05.662389    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:06.162800    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:06.662296    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:07.162770    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:07.662185    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:08.163484    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:08.662101    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:09.163166    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:09.661850    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:10.163219    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:10.662450    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:11.163350    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:11.661443    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:12.162140    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:12.662908    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:13.162389    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:13.662815    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:14.162317    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:14.662985    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:15.161953    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:15.662582    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:16.162711    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:16.662384    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:17.163213    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:17.662951    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:18.162863    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:18.663346    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:19.162301    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:19.664439    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:20.162163    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:20.663035    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:21.163263    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:21.663152    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:22.161955    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:22.663328    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:23.162424    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:23.662868    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:24.162408    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:24.663167    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:25.162910    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:25.662394    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	[... the same "sudo pgrep -xnf kube-apiserver.*minikube.*" probe repeats at ~500ms intervals from 00:42:26.16 through 00:42:52.66 (54 identical attempts elided) ...]
	I1217 00:42:53.163097    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
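
The half-second cadence above is minikube polling for the kube-apiserver process inside the guest; the probe never succeeds, so the wait gives up at 00:42:53 and the diagnostics below begin. As a minimal sketch of that kind of poll loop, assuming a plain ssh invocation rather than minikube's internal ssh_runner (host, port, and timeout below are hypothetical):

    // Illustrative sketch, not minikube's actual source: poll pgrep over
    // ssh every 500ms until kube-apiserver shows up or the deadline passes.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer runs `sudo pgrep -xnf kube-apiserver.*minikube.*`
    // on the guest; -f matches against the full command line, -x requires
    // an exact pattern match, -n picks the newest matching process.
    func waitForAPIServer(sshArgs []string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            args := append(sshArgs, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
            if err := exec.Command("ssh", args...).Run(); err == nil {
                return nil // pgrep exits 0 once a matching process exists
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver never appeared within %s", timeout)
    }

    func main() {
        err := waitForAPIServer([]string{"docker@127.0.0.1", "-p", "2222"}, 30*time.Second)
        fmt.Println(err)
    }

pgrep's exit status doubles as the readiness signal (0 once a match exists, non-zero otherwise), so no output parsing is needed.
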
	I1217 00:42:53.661774    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:42:53.693561    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.693561    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:42:53.697663    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:42:53.729976    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.729976    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:42:53.733954    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:42:53.762808    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.762808    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:42:53.767775    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:42:53.797017    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.797017    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:42:53.800693    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:42:53.829028    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.829028    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:42:53.832681    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:42:53.860730    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.860730    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:42:53.864375    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:42:53.893858    7944 logs.go:282] 0 containers: []
	W1217 00:42:53.893858    7944 logs.go:284] No container was found matching "kindnet"
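
Once the process check is abandoned, minikube enumerates the expected control-plane containers by Docker name filter, and every query above returns zero IDs, which is why each component is reported missing. A sketch of the same enumeration, assuming direct access to the docker CLI (the filter and format strings mirror the log; the rest is illustrative):

    // Illustrative: list container IDs for each expected component using
    // the same `docker ps -a --filter name=k8s_<component> --format {{.ID}}`
    // query that appears in the log above.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) ([]string, error) {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil // one ID per line; empty when nothing matches
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
            ids, err := containerIDs(c)
            fmt.Printf("%s: %d containers %v (err=%v)\n", c, len(ids), ids, err)
        }
    }

The k8s_ prefix is the naming convention for Kubernetes-managed Docker containers (k8s_<container>_<pod>_<namespace>_<uid>_<attempt>), so an empty result for every filter means no control-plane container was ever created.
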
	I1217 00:42:53.893858    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:42:53.893858    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:42:53.958662    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:42:53.958662    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:42:53.990110    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:42:53.990110    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:42:54.075886    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:42:54.062994   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.064181   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.068054   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.070063   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:54.071483   23815 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
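
The describe-nodes failure has the same root cause: nothing is listening on this profile's apiserver port (8441), so kubectl's discovery client retries and every attempt is refused at the TCP layer before TLS or authentication are even reached. A plain dial reproduces the symptom; a quick illustrative probe (port hardcoded to match the log):

    // Probe the apiserver port the way the refused kubectl requests do.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
        if err != nil {
            fmt.Println("apiserver port closed:", err) // mirrors "dial tcp [::1]:8441: connect: connection refused"
            return
        }
        conn.Close()
        fmt.Println("something is listening on 8441")
    }
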
	I1217 00:42:54.075886    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:42:54.075886    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:42:54.124100    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:42:54.124100    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
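
The "container status" gatherer is a small shell fallback chain: "which crictl || echo crictl" substitutes crictl's full path when it is installed (or the bare name when it is not), and if that ps -a invocation fails, the trailing "|| sudo docker ps -a" branch runs instead. The same logic expressed in Go, as an illustrative sketch (sudo handling and error reporting simplified):

    // Prefer crictl for container status; fall back to docker when crictl
    // is absent or its invocation fails.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func containerStatus() ([]byte, error) {
        if path, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
                return out, nil
            }
        }
        return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("both crictl and docker failed:", err)
            return
        }
        fmt.Print(string(out))
    }
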
	I1217 00:42:56.693664    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:56.717550    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:42:56.749444    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.749476    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:42:56.753285    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:42:56.784073    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.784073    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:42:56.788320    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:42:56.817232    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.817232    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:42:56.821873    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:42:56.853120    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.853120    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:42:56.857160    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:42:56.887514    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.887514    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:42:56.891198    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:42:56.922568    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.922636    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:42:56.925831    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:42:56.954531    7944 logs.go:282] 0 containers: []
	W1217 00:42:56.954531    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:42:56.954531    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:42:56.954531    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:42:57.019098    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:42:57.019098    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:42:57.050929    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:42:57.050955    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:42:57.138578    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:42:57.130682   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.131621   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.132913   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.134193   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:42:57.135394   23971 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:42:57.138578    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:42:57.138578    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:42:57.182851    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:42:57.182851    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:42:59.736560    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:42:59.756547    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:42:59.785666    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.785666    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:42:59.789191    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:42:59.818090    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.818151    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:42:59.821701    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:42:59.849198    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.849198    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:42:59.852824    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:42:59.880565    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.880565    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:42:59.884161    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:42:59.915009    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.915009    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:42:59.918550    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:42:59.949230    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.949230    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:42:59.953371    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:42:59.979962    7944 logs.go:282] 0 containers: []
	W1217 00:42:59.979962    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:42:59.979962    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:42:59.979962    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:00.044543    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:00.044543    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:00.075045    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:00.075045    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:00.184096    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:00.172623   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.173411   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.176396   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.177559   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:00.178839   24124 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:00.184096    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:00.184096    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:00.229125    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:00.229125    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:02.788235    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:02.812066    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:02.844035    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.844035    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:02.847391    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:02.879346    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.879346    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:02.883507    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:02.911508    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.911573    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:02.915132    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:02.944186    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.944186    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:02.948177    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:02.977489    7944 logs.go:282] 0 containers: []
	W1217 00:43:02.977489    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:02.980961    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:03.009657    7944 logs.go:282] 0 containers: []
	W1217 00:43:03.009657    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:03.013587    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:03.042816    7944 logs.go:282] 0 containers: []
	W1217 00:43:03.042816    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:03.042816    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:03.042816    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:03.126456    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:03.115768   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.116665   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.118976   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.119737   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:03.121834   24270 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:03.126456    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:03.126456    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:03.167566    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:03.167566    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:03.219094    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:03.219094    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:03.285299    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:03.285299    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:05.820619    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:05.845854    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:05.875867    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.875867    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:05.879229    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:05.909558    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.909558    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:05.912556    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:05.942200    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.942273    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:05.945627    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:05.975289    7944 logs.go:282] 0 containers: []
	W1217 00:43:05.975289    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:05.979052    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:06.009570    7944 logs.go:282] 0 containers: []
	W1217 00:43:06.009570    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:06.013210    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:06.042977    7944 logs.go:282] 0 containers: []
	W1217 00:43:06.042977    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:06.046640    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:06.075849    7944 logs.go:282] 0 containers: []
	W1217 00:43:06.075849    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:06.075849    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:06.075849    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:06.120266    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:06.120266    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:06.168821    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:06.168821    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:06.230879    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:06.230879    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:06.260885    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:06.260885    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:06.340031    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:06.330529   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.331395   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.334293   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.335557   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:06.336695   24447 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:08.845285    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:08.868682    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:08.897291    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.897291    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:08.900871    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:08.928001    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.928001    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:08.931488    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:08.961792    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.961792    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:08.965426    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:08.994180    7944 logs.go:282] 0 containers: []
	W1217 00:43:08.994253    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:08.997983    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:09.026539    7944 logs.go:282] 0 containers: []
	W1217 00:43:09.026539    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:09.030228    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:09.061065    7944 logs.go:282] 0 containers: []
	W1217 00:43:09.061094    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:09.064483    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:09.093815    7944 logs.go:282] 0 containers: []
	W1217 00:43:09.093815    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:09.093815    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:09.093815    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:09.173989    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:09.162229   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.164006   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.164905   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.168015   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:09.169720   24576 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:09.174037    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:09.174037    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:09.214846    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:09.214846    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:09.269685    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:09.269685    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:09.331802    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:09.331802    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:11.869149    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:11.892656    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:11.921635    7944 logs.go:282] 0 containers: []
	W1217 00:43:11.921635    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:11.926449    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:11.957938    7944 logs.go:282] 0 containers: []
	W1217 00:43:11.957938    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:11.961505    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:11.991894    7944 logs.go:282] 0 containers: []
	W1217 00:43:11.991894    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:11.995992    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:12.025039    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.025039    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:12.029016    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:12.060459    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.060459    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:12.064652    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:12.096164    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.096164    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:12.100038    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:12.129762    7944 logs.go:282] 0 containers: []
	W1217 00:43:12.129824    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:12.129824    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:12.129824    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:12.194950    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:12.194950    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:12.227435    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:12.227435    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:12.311750    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:12.301902   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.303071   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.304222   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.305986   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:12.307529   24731 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:12.311750    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:12.311750    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:12.352387    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:12.352387    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:14.907650    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:14.933011    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:14.961340    7944 logs.go:282] 0 containers: []
	W1217 00:43:14.961340    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:14.964869    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:14.991179    7944 logs.go:282] 0 containers: []
	W1217 00:43:14.991179    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:14.996502    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:15.025325    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.025325    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:15.031024    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:15.058452    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.058452    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:15.062691    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:15.091232    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.091232    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:15.096528    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:15.127551    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.127551    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:15.131605    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:15.161113    7944 logs.go:282] 0 containers: []
	W1217 00:43:15.161113    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:15.161113    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:15.161113    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:15.189644    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:15.189644    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:15.270306    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:15.259821   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.260629   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.263303   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.264244   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:15.266788   24878 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:15.270306    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:15.270306    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:15.311714    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:15.311714    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:15.371391    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:15.371391    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:17.939209    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:17.962095    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:17.990273    7944 logs.go:282] 0 containers: []
	W1217 00:43:17.990273    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:17.993918    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:18.025229    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.025229    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:18.029538    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:18.060092    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.060092    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:18.064444    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:18.095199    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.095230    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:18.098808    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:18.129658    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.129658    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:18.133236    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:18.163628    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.163628    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:18.167493    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:18.199253    7944 logs.go:282] 0 containers: []
	W1217 00:43:18.199253    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:18.199253    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:18.199253    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:18.252203    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:18.252203    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:18.316097    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:18.316097    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:18.347393    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:18.347393    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:18.426495    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:18.416595   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.417796   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.419140   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.420105   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:18.421235   25042 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:43:18.426495    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:18.426495    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:20.972950    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:20.998624    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:21.025837    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.025837    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:21.029315    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:21.061085    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.061085    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:21.065387    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:21.092871    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.092871    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:21.096706    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:21.126179    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.126179    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:21.129834    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:21.159720    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.159720    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:21.163263    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:21.193011    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.193011    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:21.196667    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:21.229222    7944 logs.go:282] 0 containers: []
	W1217 00:43:21.229222    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:21.229222    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:21.229222    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:21.279391    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:21.279391    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:21.341649    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:21.341649    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:21.372055    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:21.372055    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:21.451011    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:21.440556   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.441861   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.442811   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.446984   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.448016   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:21.440556   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.441861   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.442811   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.446984   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:21.448016   25192 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
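Note that each "failed describe nodes" block prints the same stderr twice: once embedded in the wrapped error message from the failed command, and again under "output: ** stderr **", where the log gatherer dumps the captured output verbatim. The duplication is in minikube's log formatting, not two separate kubectl invocations; the timestamps and the PID (25192 in the block above) are identical in both copies.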
	I1217 00:43:21.451011    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:21.451011    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:24.011538    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:24.037171    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:24.067520    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.067544    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:24.070755    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:24.101421    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.101454    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:24.104927    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:24.133336    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.133336    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:24.137178    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:24.164662    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.164662    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:24.168324    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:24.200218    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.200218    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:24.203764    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:24.234603    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.234603    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:24.238011    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:24.267400    7944 logs.go:282] 0 containers: []
	W1217 00:43:24.267400    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:24.267400    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:24.267400    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:24.348263    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:24.338918   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.339739   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.341999   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.343378   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.344717   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:24.338918   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.339739   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.341999   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.343378   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:24.344717   25322 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:24.348263    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:24.348263    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:24.393298    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:24.393298    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:24.446709    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:24.446709    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:24.518891    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:24.518891    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
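Each gathering pass collects the same five sources, only in varying order: the kubelet and docker/cri-docker units via journalctl -u ... -n 400 (last 400 journal lines per unit), dmesg restricted to the warn,err,crit,alert,emerg levels and trimmed with tail -n 400, a kubectl describe nodes run through the bundled /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl, and a container listing. Only the describe nodes step fails, because it is the only step that needs a live apiserver.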
	I1217 00:43:27.054877    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:27.078747    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:27.111142    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.111142    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:27.114844    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:27.143801    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.143801    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:27.147663    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:27.176215    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.176215    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:27.179758    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:27.208587    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.208587    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:27.211873    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:27.241061    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.241061    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:27.244905    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:27.276011    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.276065    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:27.279281    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:27.309068    7944 logs.go:282] 0 containers: []
	W1217 00:43:27.309068    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:27.309068    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:27.309068    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:27.372079    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:27.372079    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:27.403215    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:27.403215    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:27.502209    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:27.492924   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.494023   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.494999   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.496603   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.497726   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:27.492924   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.494023   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.494999   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.496603   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:27.497726   25484 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:27.502209    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:27.502209    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:27.543251    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:27.543251    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
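The container status step relies on a shell fallback: `which crictl || echo crictl` substitutes the crictl path when it is installed, otherwise the bare word crictl, whose failure then triggers the || sudo docker ps -a branch. A sketch of the same fallback made explicit in Go via bash -c (hypothetical, just to show the control flow):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// If crictl exists, run it; otherwise the bare word "crictl" fails
		// and the right-hand side of || lists containers via docker instead.
		out, err := exec.Command("/bin/bash", "-c",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
		if err != nil {
			fmt.Println("both crictl and docker failed:", err)
		}
		fmt.Print(string(out))
	}

On this docker-runtime node the docker branch is what actually supplies the listing.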
	I1217 00:43:30.103213    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:30.126929    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:30.158148    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.158148    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:30.162286    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:30.191927    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.191927    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:30.195748    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:30.225040    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.225040    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:30.229444    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:30.260498    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.260498    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:30.264750    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:30.293312    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.293312    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:30.296869    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:30.325167    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.325167    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:30.328938    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:30.363267    7944 logs.go:282] 0 containers: []
	W1217 00:43:30.363267    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:30.363267    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:30.363267    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:30.393795    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:30.393795    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:30.487446    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:30.464124   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.465346   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.468428   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.469684   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.481402   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:30.464124   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.465346   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.468428   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.469684   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:30.481402   25634 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:30.487446    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:30.487446    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:30.530226    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:30.530226    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:30.585635    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:30.585635    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:33.151438    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:33.175766    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:33.207203    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.207203    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:33.210965    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:33.237795    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.237795    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:33.242087    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:33.273041    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.273041    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:33.277103    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:33.305283    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.305283    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:33.309730    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:33.337737    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.337737    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:33.341408    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:33.370694    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.370694    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:33.374111    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:33.407836    7944 logs.go:282] 0 containers: []
	W1217 00:43:33.407836    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:33.407836    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:33.407836    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:33.434955    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:33.434955    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:33.529365    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:33.517320   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.518450   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.519517   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.520800   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.522107   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:33.517320   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.518450   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.519517   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.520800   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:33.522107   25794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
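The five E... memcache.go:265 lines per attempt come from kubectl's discovery client, which appears to retry the /api group-list request several times before kubectl prints the final "The connection to the server localhost:8441 was refused" summary; the count is a client-side retry artifact, not five separate probes by minikube.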
	I1217 00:43:33.529365    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:33.529365    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:33.572145    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:33.572145    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:33.624502    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:33.624502    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:36.189426    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:36.213378    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:36.243407    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.243407    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:36.246746    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:36.274995    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.274995    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:36.278271    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:36.305533    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.305533    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:36.309459    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:36.338892    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.338892    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:36.342669    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:36.373516    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.373516    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:36.377003    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:36.404831    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.404831    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:36.408515    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:36.437790    7944 logs.go:282] 0 containers: []
	W1217 00:43:36.437790    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:36.437790    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:36.437790    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:36.540076    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:36.526050   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.528341   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.531176   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.532283   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.533415   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:36.526050   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.528341   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.531176   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.532283   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:36.533415   25938 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:36.540076    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:36.540076    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:36.580664    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:36.580664    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:36.635234    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:36.635234    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:36.695702    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:36.695702    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:39.230926    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:39.255012    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:39.288661    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.288661    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:39.293143    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:39.320903    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.320967    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:39.324725    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:39.350161    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.350161    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:39.353696    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:39.380073    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.380073    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:39.383515    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:39.411510    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.411510    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:39.415491    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:39.449683    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.449683    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:39.453620    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:39.487800    7944 logs.go:282] 0 containers: []
	W1217 00:43:39.487800    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:39.487800    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:39.487800    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:39.552943    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:39.552943    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:39.582035    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:39.583033    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:39.660499    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:39.647312   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.648102   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.652665   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.654408   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.654966   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:39.647312   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.648102   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.652665   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.654408   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:39.654966   26098 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:39.660499    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:39.660499    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:39.705645    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:39.705645    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:42.267731    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:42.297885    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:42.329299    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.329326    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:42.332959    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:42.361173    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.361173    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:42.365107    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:42.393236    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.393236    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:42.397363    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:42.430949    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.430949    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:42.435377    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:42.465696    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.465696    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:42.468849    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:42.512182    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.512182    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:42.515699    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:42.545680    7944 logs.go:282] 0 containers: []
	W1217 00:43:42.545680    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:42.545680    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:42.545680    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:42.607372    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:42.607372    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:42.637761    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:42.637761    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:42.720140    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:42.709136   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.709905   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.711877   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.712984   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.713829   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:42.709136   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.709905   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.711877   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.712984   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:42.713829   26246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:42.720140    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:42.720140    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:42.760712    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:42.760712    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:45.318861    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:45.345331    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:45.376136    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.376136    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:45.379539    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:45.408720    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.408720    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:45.412623    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:45.444664    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.444664    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:45.448226    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:45.484195    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.484195    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:45.488022    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:45.515242    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.515242    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:45.519184    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:45.551260    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.551260    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:45.554894    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:45.581795    7944 logs.go:282] 0 containers: []
	W1217 00:43:45.581795    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:45.581795    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:45.581795    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:45.625880    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:45.625880    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:45.678280    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:45.678280    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:45.738938    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:45.738938    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:45.770054    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:45.770054    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:45.854057    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:45.839960   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.842045   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.843544   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.846571   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.847420   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:45.839960   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.842045   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.843544   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.846571   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:45.847420   26412 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:48.359806    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:48.384092    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:48.415158    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.415192    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:48.418996    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:48.446149    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.446149    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:48.449676    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:48.487416    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.487416    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:48.491652    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:48.520073    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.520073    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:48.524101    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:48.550421    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.550421    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:48.554497    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:48.583643    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.583666    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:48.587154    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:48.616812    7944 logs.go:282] 0 containers: []
	W1217 00:43:48.616812    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:48.616812    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:48.616812    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:48.681323    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:48.681323    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:48.712866    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:48.712866    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:48.798447    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:48.788338   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.789333   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.790575   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.791655   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.792589   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:48.788338   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.789333   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.790575   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.791655   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:48.792589   26545 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:48.798447    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:48.798447    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:43:48.839546    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:48.839546    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:51.393802    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:43:51.419527    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:43:51.453783    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.453783    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:43:51.457619    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:43:51.496053    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.496053    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:43:51.499949    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:43:51.528492    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.528492    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:43:51.531946    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:43:51.560363    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.560363    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:43:51.563875    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:43:51.597143    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.597143    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:43:51.600764    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:43:51.630459    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.630459    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:43:51.634473    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:43:51.667072    7944 logs.go:282] 0 containers: []
	W1217 00:43:51.667072    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:43:51.667072    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:43:51.667072    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:43:51.719154    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:43:51.719154    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:43:51.779761    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:43:51.779761    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:43:51.810036    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:43:51.810036    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:43:51.887952    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:43:51.877388   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.878091   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.881129   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.882321   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.883227   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:43:51.877388   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.878091   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.881129   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.882321   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:43:51.883227   26710 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:43:51.887952    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:43:51.887952    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
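The three-second cadence of these cycles reflects a wait loop: minikube keeps re-dialing the apiserver endpoint (localhost:8441 for this profile) and re-collecting diagnostics until the port answers or an overall deadline expires. A hedged reconstruction of that pattern in Go follows; it is illustrative only, and the function and variable names are hypothetical, not minikube's.

    // Illustrative wait-for-apiserver loop; the ~3 s sleep mirrors the
    // retry cadence in the log above. All names here are hypothetical.
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func waitForAPIServer(addr string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
    		if err == nil {
    			conn.Close()
    			return nil // port open; the apiserver is at least listening
    		}
    		// "connection refused" lands here while nothing listens on the port.
    		time.Sleep(3 * time.Second)
    	}
    	return fmt.Errorf("apiserver at %s not reachable within %s", addr, timeout)
    }

    func main() {
    	if err := waitForAPIServer("localhost:8441", time.Minute); err != nil {
    		fmt.Println(err)
    	}
    }

Because nothing ever listens on 8441, every dial returns "connection refused" and the loop runs until the test's timeout, producing the repeated passes condensed below.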
	[log condensed: the identical log-gathering cycle repeats every ~3 s from 00:43:54 through 00:44:19 (kubectl pids 26838, 27000, 27163, 27299, 27442, 27617, 27744, 27899, 28052). Each pass again finds 0 containers matching kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, and kindnet, re-gathers the kubelet, dmesg, Docker, and container-status logs, and fails "describe nodes" (exit status 1) with the same five "connection refused" errors against https://localhost:8441.]
	I1217 00:44:21.747174    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:21.771176    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:21.800995    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.800995    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:21.804142    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:21.836064    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.836131    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:21.839865    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:21.868223    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.868292    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:21.871954    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:21.900714    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.900714    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:21.904281    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:21.931611    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.931611    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:21.935666    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:21.963188    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.963188    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:21.967538    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:21.994527    7944 logs.go:282] 0 containers: []
	W1217 00:44:21.994527    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:21.994527    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:21.994527    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:22.061635    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:22.061635    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:22.093213    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:22.093213    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:22.179644    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:22.168849   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.170300   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.172127   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.174562   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.176641   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:22.168849   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.170300   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.172127   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.174562   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:22.176641   28203 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:22.179644    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:22.179644    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:22.223092    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:22.223092    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
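
Each cycle in this log first checks for a running apiserver process (`pgrep -xnf kube-apiserver.*minikube.*`), then scans for the expected control-plane containers by name; the `k8s_` prefix is the docker-shim container naming convention the filters rely on. A bash equivalent of that scan, grounded in the commands logged above (all of them return empty here):

    # Scan for the control-plane components minikube looks for each cycle.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
    done
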
	I1217 00:44:24.783065    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:24.806396    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:24.838512    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.838512    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:24.842023    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:24.871052    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.871052    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:24.874639    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:24.903466    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.903466    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:24.906973    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:24.938000    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.938000    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:24.942149    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:24.970337    7944 logs.go:282] 0 containers: []
	W1217 00:44:24.970371    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:24.973308    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:25.003460    7944 logs.go:282] 0 containers: []
	W1217 00:44:25.003460    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:25.007008    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:25.035638    7944 logs.go:282] 0 containers: []
	W1217 00:44:25.035638    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:25.035638    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:25.035638    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:25.097833    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:25.097833    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:25.128758    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:25.128758    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:25.209843    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:25.201498   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.202808   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.204759   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.205808   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.207251   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:25.201498   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.202808   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.204759   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.205808   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:25.207251   28352 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:25.209843    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:25.209843    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:25.250600    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:25.250600    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:27.806610    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:27.831257    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:27.864142    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.864142    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:27.867995    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:27.897561    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.897561    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:27.900925    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:27.931079    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.931079    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:27.934151    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:27.964321    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.964321    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:27.969534    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:27.999709    7944 logs.go:282] 0 containers: []
	W1217 00:44:27.999709    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:28.002966    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:28.034961    7944 logs.go:282] 0 containers: []
	W1217 00:44:28.035008    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:28.038649    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:28.067733    7944 logs.go:282] 0 containers: []
	W1217 00:44:28.067733    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:28.067733    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:28.067733    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:28.150573    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:28.140463   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.141608   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.143366   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.146165   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.147662   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:28.140463   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.141608   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.143366   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.146165   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:28.147662   28498 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:28.150573    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:28.150573    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:28.192203    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:28.192203    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:28.248534    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:28.248624    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:28.306585    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:28.306585    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
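
When no component containers are found, the gatherer falls back to host-level logs: kubelet and docker journals, filtered dmesg, and a container-status listing. The same commands can be run manually over `minikube ssh`; this sketch simply restates the gatherer's own invocations from the lines above:

    # journalctl units and dmesg filter match the gatherer's commands verbatim.
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
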
	I1217 00:44:30.842138    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:30.867340    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:30.899142    7944 logs.go:282] 0 containers: []
	W1217 00:44:30.899142    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:30.903037    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:30.932057    7944 logs.go:282] 0 containers: []
	W1217 00:44:30.932057    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:30.938184    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:30.965554    7944 logs.go:282] 0 containers: []
	W1217 00:44:30.965554    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:30.969154    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:30.997999    7944 logs.go:282] 0 containers: []
	W1217 00:44:30.997999    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:31.001861    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:31.031079    7944 logs.go:282] 0 containers: []
	W1217 00:44:31.031142    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:31.034735    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:31.063582    7944 logs.go:282] 0 containers: []
	W1217 00:44:31.063582    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:31.069235    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:31.098869    7944 logs.go:282] 0 containers: []
	W1217 00:44:31.098948    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:31.098948    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:31.098948    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:31.127253    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:31.127253    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:31.211541    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:31.202334   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.203549   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.205527   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.206517   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.207872   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:31.202334   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.203549   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.205527   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.206517   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:31.207872   28652 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:31.211541    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:31.211541    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:31.258478    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:31.258478    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:31.308932    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:31.308932    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:33.876600    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:33.899781    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:33.930969    7944 logs.go:282] 0 containers: []
	W1217 00:44:33.930969    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:33.934621    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:33.964938    7944 logs.go:282] 0 containers: []
	W1217 00:44:33.964938    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:33.968775    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:33.998741    7944 logs.go:282] 0 containers: []
	W1217 00:44:33.998793    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:34.002265    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:34.030279    7944 logs.go:282] 0 containers: []
	W1217 00:44:34.030279    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:34.034177    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:34.063244    7944 logs.go:282] 0 containers: []
	W1217 00:44:34.063244    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:34.066512    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:34.095842    7944 logs.go:282] 0 containers: []
	W1217 00:44:34.095842    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:34.099843    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:34.133173    7944 logs.go:282] 0 containers: []
	W1217 00:44:34.133173    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:34.133173    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:34.133173    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:34.198297    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:34.198297    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:34.229134    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:34.229134    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:34.305327    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:34.295599   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.296405   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.298959   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.301044   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.302073   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:34.295599   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.296405   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.298959   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.301044   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:34.302073   28820 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:34.305327    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:34.305327    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:34.346912    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:34.346912    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:36.903423    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:36.929005    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:36.959255    7944 logs.go:282] 0 containers: []
	W1217 00:44:36.959255    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:36.962841    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:36.991016    7944 logs.go:282] 0 containers: []
	W1217 00:44:36.991016    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:36.995294    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:37.027615    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.027615    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:37.031225    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:37.063793    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.063793    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:37.067539    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:37.098257    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.098257    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:37.104945    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:37.135094    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.135094    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:37.139494    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:37.170825    7944 logs.go:282] 0 containers: []
	W1217 00:44:37.170825    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:37.170825    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:37.170825    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:37.236025    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:37.236025    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:37.266143    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:37.266143    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:37.356401    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:37.344016   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.345140   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.346045   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.350812   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.351984   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:37.344016   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.345140   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.346045   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.350812   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:37.351984   28970 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:37.356401    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:37.356401    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:37.397010    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:37.397010    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:39.951831    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:39.975669    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:40.007629    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.007629    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:40.011435    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:40.041534    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.041534    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:40.045543    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:40.072927    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.072927    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:40.076835    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:40.104604    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.104604    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:40.108678    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:40.136644    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.136644    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:40.140732    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:40.172579    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.172579    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:40.176191    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:40.207078    7944 logs.go:282] 0 containers: []
	W1217 00:44:40.207078    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:40.207078    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:40.207171    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:40.271921    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:40.271921    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:40.302650    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:40.302650    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:40.384552    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:40.373909   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.375248   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.376424   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.377960   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.378727   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:40.373909   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.375248   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.376424   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.377960   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:40.378727   29120 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:40.384552    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:40.384552    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:40.425377    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:40.425377    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:42.980281    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:43.003860    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:43.036168    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.036168    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:43.040136    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:43.068891    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.068891    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:43.072976    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:43.103823    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.103823    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:43.107774    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:43.134339    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.134339    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:43.137929    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:43.168166    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.168166    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:43.172279    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:43.200333    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.200333    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:43.204183    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:43.236225    7944 logs.go:282] 0 containers: []
	W1217 00:44:43.236225    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:43.236225    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:43.236225    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:43.280577    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:43.280577    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:43.331604    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:43.331604    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:43.392357    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:43.392357    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:43.423125    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:43.423125    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:43.508115    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:43.496794   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.498087   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.499982   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.501972   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.502846   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:43.496794   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.498087   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.499982   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.501972   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:43.502846   29288 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
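
The cycle timestamps (00:44:19, :22, :25, :28, ...) advance by roughly three seconds, consistent with a fixed-interval wait loop polling for apiserver health until an overall deadline. A minimal bash sketch of that pattern; the interval and deadline here are illustrative assumptions, not minikube's actual configuration:

    # Poll until the apiserver answers or the deadline passes (hypothetical values).
    deadline=$((SECONDS + 240))
    until curl -sk 'https://localhost:8441/api?timeout=32s' >/dev/null 2>&1; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never became reachable"; break; }
      sleep 3
    done
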
	I1217 00:44:46.013886    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:46.042290    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:46.074707    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.074707    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:46.078216    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:46.109309    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.109309    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:46.112661    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:46.141002    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.141002    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:46.144585    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:46.172550    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.172550    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:46.178681    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:46.209054    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.209054    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:46.212761    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:46.242212    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.242212    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:46.245894    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:46.273677    7944 logs.go:282] 0 containers: []
	W1217 00:44:46.273677    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:46.273719    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:46.273719    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:46.339840    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:46.339840    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:46.373287    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:46.373287    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:46.452686    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:46.442520   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.443589   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.446075   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.448524   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.449556   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:46.442520   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.443589   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.446075   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.448524   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:46.449556   29425 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:46.452686    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:46.452686    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:46.498608    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:46.498608    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:49.050761    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:49.075428    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:49.105673    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.105673    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:49.109924    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:49.140245    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.140245    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:49.143980    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:49.175115    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.175115    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:49.181267    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:49.213667    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.213667    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:49.217486    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:49.249277    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.249277    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:49.252880    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:49.279244    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.279287    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:49.282893    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:49.313826    7944 logs.go:282] 0 containers: []
	W1217 00:44:49.313826    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:49.313826    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:49.313826    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:49.395270    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:49.385168   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.385960   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.388757   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.390178   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.391697   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:49.385168   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.385960   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.388757   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.390178   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:49.391697   29569 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:49.395270    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:49.395270    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:49.439990    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:49.439990    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:49.493048    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:49.493048    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:49.555675    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:49.555675    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:52.091191    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:52.121154    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:52.152807    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.152807    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:52.157047    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:52.185793    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.185793    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:52.188792    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:52.217804    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.218793    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:52.221792    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:52.253749    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.253749    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:52.257528    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:52.286783    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.286783    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:52.290341    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:52.319799    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.319799    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:52.323376    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:52.351656    7944 logs.go:282] 0 containers: []
	W1217 00:44:52.351656    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:52.351656    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:52.351656    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:52.395381    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:52.395381    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:52.449049    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:52.449049    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:52.511942    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:52.511942    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:52.541707    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:52.541707    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:52.622537    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:52.614766   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.615704   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.616948   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.617983   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.618983   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:52.614766   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.615704   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.616948   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.617983   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:52.618983   29738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
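Every "failed describe nodes" block above reduces to the same root cause: kubectl's API discovery call to https://localhost:8441 is refused because nothing is listening on the apiserver port. Assuming a curl binary on the node and using <profile> as a placeholder for the cluster name (neither is in the log), the probe could be reproduced by hand:

    # Hypothetical manual reproduction of the refused discovery call;
    # <profile> is a placeholder, curl on the node is an assumption.
    minikube ssh -p <profile> "curl -ks https://localhost:8441/api?timeout=32s"
    # Or with the same kubectl binary and kubeconfig the log invokes:
    minikube ssh -p <profile> "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get --raw /api --kubeconfig=/var/lib/minikube/kubeconfig"
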
	I1217 00:44:55.130052    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:55.154497    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:55.185053    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.185086    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:55.188968    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:55.215935    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.215935    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:55.220385    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:55.249124    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.249159    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:55.253058    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:55.282148    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.282230    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:55.285701    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:55.315081    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.315081    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:55.320240    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:55.350419    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.350449    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:55.353993    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:55.386346    7944 logs.go:282] 0 containers: []
	W1217 00:44:55.386346    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:55.386346    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:55.386346    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:55.463518    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:55.456649   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.457723   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.458695   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.460286   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.461389   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:55.456649   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.457723   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.458695   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.460286   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:55.461389   29871 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:55.463518    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:55.463518    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:55.502884    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:55.502884    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:44:55.567300    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:55.567300    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:55.630547    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:55.630547    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:58.165717    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:44:58.189522    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:44:58.223415    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.223415    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:44:58.227138    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:44:58.256133    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.256133    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:44:58.259919    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:44:58.289751    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.289751    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:44:58.293341    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:44:58.323835    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.323835    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:44:58.327981    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:44:58.358897    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.358897    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:44:58.362525    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:44:58.393696    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.393696    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:44:58.397786    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:44:58.426810    7944 logs.go:282] 0 containers: []
	W1217 00:44:58.426810    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:44:58.426810    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:44:58.426810    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:44:58.492668    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:44:58.492668    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:44:58.523854    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:44:58.523854    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:44:58.609164    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:44:58.598901   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.599812   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.602076   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.604272   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.606217   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:44:58.598901   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.599812   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.602076   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.604272   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:44:58.606217   30032 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:44:58.609164    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:44:58.609164    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:44:58.654356    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:44:58.654356    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
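The "container status" gatherer above is a fallback chain: the embedded "which crictl || echo crictl" substitutes the literal string crictl when the binary is not on PATH, so the first command fails cleanly ("command not found") and the "|| sudo docker ps -a" branch takes over:

    # Fallback chain from the container-status gatherer above:
    # prefer crictl when installed, otherwise list containers with docker.
    sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a
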
	I1217 00:45:01.211859    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:01.236949    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:01.268645    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.268645    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:01.273856    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:01.305336    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.305336    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:01.309133    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:01.339056    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.339056    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:01.343432    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:01.373802    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.373802    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:01.378587    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:01.408624    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.408624    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:01.414210    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:01.446499    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.446499    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:01.450189    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:01.479782    7944 logs.go:282] 0 containers: []
	W1217 00:45:01.479782    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:01.479782    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:01.479829    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:01.526819    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:01.526819    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:01.591797    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:01.591797    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:01.624206    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:01.624206    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:01.713187    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:01.701188   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.703402   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.704627   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.705600   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.706926   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:01.701188   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.703402   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.704627   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.705600   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:01.706926   30199 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:01.713187    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:01.713187    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:04.261443    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:04.286201    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:04.315610    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.315610    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:04.319607    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:04.348007    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.348007    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:04.351825    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:04.378854    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.378854    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:04.382430    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:04.414385    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.414385    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:04.419751    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:04.447734    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.447734    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:04.452650    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:04.483414    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.483414    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:04.488519    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:04.520173    7944 logs.go:282] 0 containers: []
	W1217 00:45:04.520173    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:04.520173    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:04.520173    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:04.583573    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:04.583573    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:04.615102    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:04.615102    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:04.703186    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:04.693374   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.694566   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.695324   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.698221   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.699360   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:04.693374   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.694566   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.695324   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.698221   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:04.699360   30336 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:04.703186    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:04.703186    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:04.745696    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:04.745696    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
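The dmesg gatherer's flags are compact (util-linux dmesg): -P disables the pager, -H selects human-readable output, -L=never suppresses color codes in the captured text, and --level restricts output to warnings and worse before tail keeps the last 400 lines. Spelled out with long options:

    # dmesg gatherer from the log, with long-form flags for readability:
    sudo dmesg --nopager --human --color=never \
         --level warn,err,crit,alert,emerg | tail -n 400
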
	I1217 00:45:07.302305    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:07.327138    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:07.357072    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.357072    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:07.361245    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:07.393135    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.393135    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:07.397020    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:07.426598    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.426623    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:07.430259    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:07.459216    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.459216    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:07.463233    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:07.491206    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.491206    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:07.496432    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:07.527082    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.527082    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:07.530080    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:07.563609    7944 logs.go:282] 0 containers: []
	W1217 00:45:07.563609    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:07.563609    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:07.563609    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:07.624175    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:07.624175    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:07.654046    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:07.655373    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:07.733760    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:07.724686   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.725828   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.726798   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.727878   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.729852   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:07.724686   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.725828   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.726798   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.727878   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:07.729852   30483 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:07.733760    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:07.733760    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:07.775826    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:07.775826    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:10.333009    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:10.359433    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:10.394281    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.394281    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:10.399772    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:10.431921    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.431921    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:10.435941    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:10.466929    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.466929    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:10.469952    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:10.500979    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.500979    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:10.504132    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:10.532972    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.532972    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:10.536526    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:10.565609    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.565609    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:10.569307    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:10.597263    7944 logs.go:282] 0 containers: []
	W1217 00:45:10.597263    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:10.597263    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:10.597263    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:10.625496    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:10.625496    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:10.716452    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:10.706137   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.707571   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.709046   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.710674   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.711932   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:10.706137   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.707571   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.709046   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.710674   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:10.711932   30627 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:10.716452    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:10.716535    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:10.757898    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:10.757898    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:10.807685    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:10.807685    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
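The kubelet and Docker gatherers are plain journalctl reads and can be run directly on the node to inspect the failure; the follow-mode line is an added suggestion, not something the test runs:

    # Journal reads matching the gatherers above:
    sudo journalctl -u kubelet -n 400               # last 400 kubelet lines
    sudo journalctl -u docker -u cri-docker -n 400  # docker + cri-docker units
    # sudo journalctl -u kubelet -f                 # follow mode (assumption)
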
	I1217 00:45:13.376757    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:13.401022    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:13.433179    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.433179    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:13.438943    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:13.466315    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.466315    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:13.469406    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:13.498170    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.498170    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:13.503463    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:13.531045    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.531045    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:13.534623    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:13.563549    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.563572    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:13.567173    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:13.595412    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.595412    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:13.599138    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:13.627347    7944 logs.go:282] 0 containers: []
	W1217 00:45:13.627347    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:13.627347    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:13.627347    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:13.687440    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:13.688440    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:13.718641    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:13.718785    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:13.801949    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:13.792952   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.794106   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.795272   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.796913   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.798020   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:13.792952   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.794106   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.795272   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.796913   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:13.798020   30779 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:13.801949    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:13.801949    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:13.846773    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:13.847288    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:16.401019    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:16.426837    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:16.461985    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.461985    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:16.465693    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:16.494330    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.494354    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:16.497490    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:16.527742    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.527742    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:16.531287    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:16.561095    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.561095    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:16.564902    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:16.594173    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.594173    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:16.597642    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:16.627598    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.627598    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:16.630884    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:16.659950    7944 logs.go:282] 0 containers: []
	W1217 00:45:16.660031    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:16.660031    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:16.660031    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:16.740660    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:16.730888   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.732344   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.734426   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.736250   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.737220   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:16.730888   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.732344   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.734426   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.736250   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:16.737220   30926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:16.740692    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:16.740692    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:16.782319    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:16.782319    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:16.835245    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:16.835245    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:16.900147    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:16.900147    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
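The process check at the top of each cycle is a single pgrep: -f matches against the full command line, -x requires the pattern to match that command line in full, and -n keeps only the newest match. A non-zero exit (no match) is presumably what keeps this wait loop retrying:

    # Apiserver process probe from the log; exits 1 while no
    # kube-apiserver process with minikube arguments is running.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
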
	I1217 00:45:19.437638    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:19.462468    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:19.493244    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.493244    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:19.497367    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:19.526430    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.526430    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:19.530589    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:19.559166    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.559222    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:19.562429    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:19.594311    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.594311    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:19.597936    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:19.627339    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.627339    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:19.632033    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:19.659648    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.659648    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:19.663351    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:19.696628    7944 logs.go:282] 0 containers: []
	W1217 00:45:19.696628    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:19.696628    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:19.696628    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:19.749701    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:19.749701    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:19.809018    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:19.809018    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:19.838771    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:19.838771    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:19.921290    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:19.910944   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.912216   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.913176   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.916258   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.918467   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:19.910944   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.912216   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.913176   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.916258   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:19.918467   31097 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:19.921290    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:19.921290    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
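
The loop above is minikube's diagnostics pass: it probes for a kube-apiserver process, then asks Docker for each expected control-plane container by its k8s_ name prefix. A minimal Go sketch of that enumeration (hypothetical helper names, not minikube's actual logs.go) could look like:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists container IDs whose name matches k8s_<component>,
// mirroring the `docker ps -a --filter=name=k8s_... --format={{.ID}}`
// calls in the log above.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("lookup %s failed: %v\n", c, err)
			continue
		}
		// An empty list here corresponds to the log's
		// `No container was found matching "<component>"` warnings.
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}

Every lookup returning zero containers, as in this run, means the control plane never came up inside the node.
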
	I1217 00:45:22.468833    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:22.494625    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:22.526034    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.526034    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:22.529623    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:22.565289    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.565289    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:22.569286    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:22.597280    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.597280    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:22.601010    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:22.630330    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.630330    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:22.634511    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:22.663939    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.663939    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:22.667575    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:22.696762    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.696792    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:22.700137    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:22.732285    7944 logs.go:282] 0 containers: []
	W1217 00:45:22.732285    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:22.732285    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:22.732285    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:22.814702    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:22.805990   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.808311   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.809673   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.810947   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:22.811986   31230 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:22.814702    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:22.814702    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:22.864515    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:22.864515    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:22.917896    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:22.917896    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:22.984213    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:22.984213    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:25.517090    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:25.542531    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:25.575294    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.575294    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:25.579526    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:25.610041    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.610041    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:25.614160    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:25.643682    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.643712    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:25.647264    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:25.679557    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.679557    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:25.685184    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:25.712791    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.712791    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:25.716775    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:25.747803    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.747803    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:25.751621    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:25.782130    7944 logs.go:282] 0 containers: []
	W1217 00:45:25.782130    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:25.782130    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:25.782130    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:25.833735    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:25.833735    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:25.894476    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:25.894476    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:25.925218    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:25.925218    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:26.009195    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:26.000055   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.001227   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.002238   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.003136   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:26.005907   31409 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:26.009195    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:26.009195    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
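
Each `describe nodes` attempt fails the same way: kubectl dials https://localhost:8441 and gets `connection refused`, because nothing is listening on the apiserver port. A tiny Go probe (port taken from the log, otherwise illustrative only) reproduces the failure mode:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// With no kube-apiserver container running, the TCP dial fails
	// immediately with `connect: connection refused`, which is exactly
	// what the memcache.go errors above report.
	conn, err := net.DialTimeout("tcp", "localhost:8441", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is open")
}
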
	I1217 00:45:28.558504    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:28.581900    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:28.615041    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.615041    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:28.619020    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:28.647386    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.647386    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:28.651512    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:28.679029    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.679029    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:28.682977    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:28.714035    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.714035    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:28.717407    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:28.746896    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.746920    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:28.749895    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:28.782541    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.782574    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:28.786249    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:28.813250    7944 logs.go:282] 0 containers: []
	W1217 00:45:28.813250    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:28.813250    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:28.813250    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:28.891492    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:28.880764   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.881769   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.882976   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.883809   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:28.886227   31531 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:28.891492    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:28.891492    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:28.934039    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:28.934039    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:28.986066    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:28.986066    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:29.044402    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:29.045400    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:31.579014    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:31.605723    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:31.639437    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.639437    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:31.643001    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:31.672858    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.672858    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:31.676418    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:31.706815    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.706815    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:31.711450    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:31.739165    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.739165    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:31.742794    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:31.774213    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.774213    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:31.778092    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:31.808021    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.808021    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:31.811911    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:31.841111    7944 logs.go:282] 0 containers: []
	W1217 00:45:31.841174    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:31.841207    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:31.841207    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:31.903600    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:31.903600    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:31.934979    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:31.934979    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:32.016581    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:32.006571   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.007538   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.008919   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.010207   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:32.011489   31692 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:32.016581    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:32.016581    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:32.059137    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:32.059137    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
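
The "container status" step uses a shell fallback: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. prefer crictl if it is on the PATH, otherwise fall back to plain docker. The same fallback, sketched in Go (hypothetical function name):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus tries crictl first and falls back to docker, mirroring
// the one-liner in the log above.
func containerStatus() ([]byte, error) {
	if path, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", path, "ps", "-a").Output(); err == nil {
			return out, nil
		}
	}
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("container status failed:", err)
		return
	}
	fmt.Print(string(out))
}
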
	I1217 00:45:34.619048    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:34.642906    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:34.676541    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.676541    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:34.680839    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:34.710245    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.710245    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:34.715809    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:34.754209    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.754227    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:34.757792    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:34.787283    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.787283    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:34.790335    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:34.823758    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.823758    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:34.827394    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:34.856153    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.856153    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:34.859978    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:34.890024    7944 logs.go:282] 0 containers: []
	W1217 00:45:34.890024    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:34.890024    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:34.890024    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:34.954222    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:34.954222    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:34.985196    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:34.985196    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:35.067666    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:35.054527   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.055553   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.056467   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.060229   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:35.061212   31842 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:35.067666    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:35.067666    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:35.109711    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:35.109711    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:37.664972    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:37.687969    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:37.717956    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.717956    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:37.721553    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:37.750935    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.750935    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:37.755377    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:37.786480    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.786480    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:37.790806    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:37.821246    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.821246    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:37.825408    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:37.854559    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.854559    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:37.858605    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:37.888189    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.888189    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:37.892436    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:37.923454    7944 logs.go:282] 0 containers: []
	W1217 00:45:37.923454    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:37.923454    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:37.923454    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:37.990022    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:37.990022    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:38.021197    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:38.021197    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:38.107061    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:38.096713   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.097911   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.098862   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.100144   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:38.101044   31992 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:38.107061    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:38.107061    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:38.150052    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:38.150052    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:40.710598    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:40.738050    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:40.769637    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.769637    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:40.773468    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:40.810478    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.810478    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:40.814079    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:40.848071    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.848071    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:40.851868    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:40.880725    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.880725    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:40.884928    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:40.915221    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.915221    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:40.919101    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:40.951097    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.951097    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:40.955307    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:40.990856    7944 logs.go:282] 0 containers: []
	W1217 00:45:40.990901    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:40.990901    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:40.990901    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:41.041987    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:41.042028    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:41.104560    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:41.104560    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:41.134782    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:41.134782    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:41.221096    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:41.210697   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.211646   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.214339   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.215988   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:41.217121   32151 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:41.221096    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:41.221096    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:43.768841    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:43.807393    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:43.840153    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.840153    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:43.843740    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:43.873589    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.873589    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:43.877086    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:43.906593    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.906593    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:43.910563    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:43.940004    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.940004    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:43.944461    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:43.984818    7944 logs.go:282] 0 containers: []
	W1217 00:45:43.984818    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:43.988580    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:44.016481    7944 logs.go:282] 0 containers: []
	W1217 00:45:44.016481    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:44.020610    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:44.050198    7944 logs.go:282] 0 containers: []
	W1217 00:45:44.050225    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:44.050225    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:44.050225    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:44.096362    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:44.096362    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:44.150219    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:44.150219    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:44.209135    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:44.209135    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:44.240518    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:44.240518    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:44.328383    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:44.316790   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.317749   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.322292   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.323067   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:44.324563   32302 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:46.833977    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:46.856919    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:46.889480    7944 logs.go:282] 0 containers: []
	W1217 00:45:46.889480    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:46.893215    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:46.924373    7944 logs.go:282] 0 containers: []
	W1217 00:45:46.924373    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:46.928774    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:46.961004    7944 logs.go:282] 0 containers: []
	W1217 00:45:46.961004    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:46.964726    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:47.003673    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.003673    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:47.006719    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:47.040232    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.040232    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:47.044112    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:47.074796    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.074796    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:47.078313    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:47.109819    7944 logs.go:282] 0 containers: []
	W1217 00:45:47.109819    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:47.109819    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:47.109819    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:47.173702    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:47.174703    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:47.204290    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:47.204290    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:47.290268    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:47.281079   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.282388   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.283451   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.284976   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:47.285968   32436 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	I1217 00:45:47.290268    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:47.290268    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:47.332308    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:47.332308    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
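
The timestamps show the whole diagnostics pass repeating on a roughly three-second cadence (00:45:19, :22, :25, ... :52) while minikube waits for the apiserver to appear. A plausible shape for that wait loop, with assumed interval and deadline values (not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the `sudo pgrep -xnf kube-apiserver.*minikube.*`
// probe that opens every iteration in the log above.
func apiserverRunning() bool {
	return exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil
}

func main() {
	deadline := time.Now().Add(2 * time.Minute) // assumed timeout
	for time.Now().Before(deadline) {
		if apiserverRunning() {
			fmt.Println("kube-apiserver is up")
			return
		}
		time.Sleep(3 * time.Second) // matches the observed cadence
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
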
	I1217 00:45:49.890367    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:49.913613    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:49.943685    7944 logs.go:282] 0 containers: []
	W1217 00:45:49.943685    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:49.947685    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:49.975458    7944 logs.go:282] 0 containers: []
	W1217 00:45:49.975458    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:49.979401    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:50.010709    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.010709    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:50.014179    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:50.046146    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.046146    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:50.050033    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:50.082525    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.082525    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:50.085833    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:50.113901    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.113943    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:50.117783    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:50.148202    7944 logs.go:282] 0 containers: []
	W1217 00:45:50.148290    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:50.148290    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:50.148290    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:50.208056    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:50.208056    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:50.239113    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:50.239113    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:50.326281    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:50.316567   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.317935   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.319862   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.321021   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.322100   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:50.316567   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.317935   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.319862   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.321021   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:50.322100   32589 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
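The describe-nodes call above fails before ever reaching the API: nothing is listening on this profile's apiserver port (8441 here, rather than minikube's default 8443), so every client in this log gets connection refused. A manual confirmation from inside the node, reusing the kubectl binary path shown in the log:

    curl -k "https://localhost:8441/api?timeout=32s"   # expect: connection refused while the apiserver is down
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl get nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig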
	I1217 00:45:50.326281    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:50.326281    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:50.369080    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:50.369080    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
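Each diagnostic pass above follows one pattern: look for every control-plane container by the k8s_<name> prefix that cri-dockerd gives pod containers, and, finding none, fall back to gathering kubelet, dmesg, describe-nodes, Docker, and container-status output. Roughly the same checks by hand (a sketch; <profile> stands in for this test's profile name):

    minikube -p <profile> ssh
    docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
    sudo journalctl -u kubelet -n 400
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a

The crictl-or-docker fallback in the last line mirrors the log exactly: it still produces a container listing even when crictl is missing from the node's PATH.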
	I1217 00:45:52.932111    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:52.956351    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 00:45:52.989854    7944 logs.go:282] 0 containers: []
	W1217 00:45:52.989854    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:45:52.995118    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 00:45:53.022557    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.022557    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:45:53.027906    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 00:45:53.062035    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.062035    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:45:53.065640    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 00:45:53.096245    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.096245    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:45:53.100861    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 00:45:53.131945    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.131945    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:45:53.135650    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 00:45:53.164825    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.164825    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:45:53.168602    7944 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 00:45:53.198961    7944 logs.go:282] 0 containers: []
	W1217 00:45:53.198961    7944 logs.go:284] No container was found matching "kindnet"
	I1217 00:45:53.198961    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:45:53.198961    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:45:53.260266    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:45:53.260266    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:45:53.290682    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:45:53.290682    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:45:53.375669    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:45:53.366817   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.367661   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.370028   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.371310   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.372461   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:45:53.366817   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.367661   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.370028   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.371310   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:45:53.372461   32738 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:45:53.375669    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:45:53.375669    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:45:53.416110    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:45:53.416110    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 00:45:55.971979    7944 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 00:45:55.991052    7944 kubeadm.go:602] duration metric: took 4m3.9896216s to restartPrimaryControlPlane
	W1217 00:45:55.991052    7944 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I1217 00:45:55.996485    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
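With the restart path abandoned after roughly four minutes, minikube wipes the control plane before re-initializing it. The reset it runs, standalone (binary path and CRI socket exactly as in the log line above):

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force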
	I1217 00:45:56.479923    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:45:56.502762    7944 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 00:45:56.518662    7944 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:45:56.523597    7944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:45:56.536371    7944 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:45:56.536371    7944 kubeadm.go:158] found existing configuration files:
	
	I1217 00:45:56.541198    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:45:56.554668    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:45:56.559154    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:45:56.576197    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:45:56.590283    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:45:56.594634    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:45:56.612520    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:45:56.626118    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:45:56.631259    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:45:56.648494    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:45:56.661811    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:45:56.665826    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
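The stale-config cleanup above is a grep-then-remove loop: each kubeconfig under /etc/kubernetes is kept only if it already points at https://control-plane.minikube.internal:8441. Here every grep exits with status 2 because the reset already deleted the files, so the rm calls are no-ops. A condensed sketch of the same loop:

    ENDPOINT=https://control-plane.minikube.internal:8441
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done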
	I1217 00:45:56.684539    7944 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:45:56.809159    7944 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 00:45:56.895277    7944 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:45:56.990840    7944 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:49:57.581295    7944 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 00:49:57.581442    7944 kubeadm.go:319] 
	I1217 00:49:57.581498    7944 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 00:49:57.586513    7944 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:49:57.586513    7944 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:49:57.587141    7944 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:49:57.587141    7944 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 00:49:57.587141    7944 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 00:49:57.587141    7944 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 00:49:57.587666    7944 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_INET: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 00:49:57.587767    7944 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 00:49:57.588407    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 00:49:57.588470    7944 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 00:49:57.589479    7944 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 00:49:57.589618    7944 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 00:49:57.589771    7944 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 00:49:57.589895    7944 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 00:49:57.589957    7944 kubeadm.go:319] OS: Linux
	I1217 00:49:57.590117    7944 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:49:57.590205    7944 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:49:57.590329    7944 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:49:57.590849    7944 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:49:57.591066    7944 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:49:57.591250    7944 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:49:57.591469    7944 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:49:57.591654    7944 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:49:57.594374    7944 out.go:252]   - Generating certificates and keys ...
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:49:57.594967    7944 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:49:57.595930    7944 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:49:57.595930    7944 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:49:57.595930    7944 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:49:57.595930    7944 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:49:57.598936    7944 out.go:252]   - Booting up control plane ...
	I1217 00:49:57.598936    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:49:57.598936    7944 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:49:57.599930    7944 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:49:57.599930    7944 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001130665s
	I1217 00:49:57.599930    7944 kubeadm.go:319] 
	I1217 00:49:57.599930    7944 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 00:49:57.599930    7944 kubeadm.go:319] 	- The kubelet is not running
	I1217 00:49:57.600944    7944 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 00:49:57.600944    7944 kubeadm.go:319] 
	I1217 00:49:57.601093    7944 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 00:49:57.601093    7944 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 00:49:57.601093    7944 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 00:49:57.601093    7944 kubeadm.go:319] 
	W1217 00:49:57.601093    7944 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001130665s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
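The init attempt dies in the wait-control-plane phase: kubeadm polls the kubelet's local health endpoint for up to 4m0s and never gets an answer, meaning the kubelet process itself never came up healthy. The probe and the two troubleshooting commands kubeadm suggests, run by hand on the node:

    curl -sSL http://127.0.0.1:10248/healthz   # the liveness probe kubeadm waits on
    sudo systemctl status kubelet              # is the unit active at all?
    sudo journalctl -xeu kubelet | tail -n 50  # the kubelet's own failure reason, if logged

Note the shift between attempts: this one ends in "context deadline exceeded", while the retry below fails with "connection refused" on the same probe, i.e. by then nothing is listening on 10248 at all.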
	
	I1217 00:49:57.606482    7944 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 00:49:58.061133    7944 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 00:49:58.080059    7944 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 00:49:58.085171    7944 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 00:49:58.098234    7944 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 00:49:58.098234    7944 kubeadm.go:158] found existing configuration files:
	
	I1217 00:49:58.102655    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf
	I1217 00:49:58.116544    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 00:49:58.121754    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 00:49:58.141782    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf
	I1217 00:49:58.155836    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 00:49:58.159790    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 00:49:58.177864    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf
	I1217 00:49:58.192169    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 00:49:58.196436    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 00:49:58.213653    7944 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf
	I1217 00:49:58.227417    7944 kubeadm.go:164] "https://control-plane.minikube.internal:8441" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8441 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 00:49:58.231893    7944 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 00:49:58.251588    7944 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 00:49:58.366677    7944 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 00:49:58.451159    7944 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 00:49:58.548545    7944 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 00:53:59.244804    7944 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 00:53:59.244874    7944 kubeadm.go:319] 
	I1217 00:53:59.245013    7944 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 00:53:59.252131    7944 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 00:53:59.252131    7944 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 00:53:59.252131    7944 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 00:53:59.252131    7944 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 00:53:59.253316    7944 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 00:53:59.253422    7944 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 00:53:59.253492    7944 kubeadm.go:319] CONFIG_INET: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 00:53:59.254063    7944 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 00:53:59.254641    7944 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 00:53:59.255258    7944 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 00:53:59.255381    7944 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 00:53:59.255513    7944 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 00:53:59.255633    7944 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 00:53:59.255694    7944 kubeadm.go:319] OS: Linux
	I1217 00:53:59.255790    7944 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 00:53:59.255877    7944 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 00:53:59.255998    7944 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 00:53:59.256094    7944 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 00:53:59.256215    7944 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 00:53:59.256364    7944 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 00:53:59.256426    7944 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 00:53:59.256548    7944 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 00:53:59.256670    7944 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 00:53:59.256888    7944 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 00:53:59.257050    7944 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 00:53:59.257070    7944 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 00:53:59.257070    7944 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 00:53:59.272325    7944 out.go:252]   - Generating certificates and keys ...
	I1217 00:53:59.272325    7944 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 00:53:59.273020    7944 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 00:53:59.273020    7944 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 00:53:59.273020    7944 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 00:53:59.273353    7944 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 00:53:59.273480    7944 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 00:53:59.273606    7944 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 00:53:59.273733    7944 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 00:53:59.273865    7944 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 00:53:59.274056    7944 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 00:53:59.274056    7944 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 00:53:59.274182    7944 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 00:53:59.274309    7944 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 00:53:59.274434    7944 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 00:53:59.274560    7944 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 00:53:59.274685    7944 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 00:53:59.274812    7944 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 00:53:59.274938    7944 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 00:53:59.275063    7944 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 00:53:59.277866    7944 out.go:252]   - Booting up control plane ...
	I1217 00:53:59.277866    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 00:53:59.278506    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 00:53:59.278506    7944 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 00:53:59.278506    7944 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 00:53:59.279071    7944 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 00:53:59.279865    7944 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 00:53:59.280054    7944 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 00:53:59.280189    7944 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000873338s
	I1217 00:53:59.280189    7944 kubeadm.go:319] 
	I1217 00:53:59.280189    7944 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 00:53:59.280189    7944 kubeadm.go:319] 	- The kubelet is not running
	I1217 00:53:59.280189    7944 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 00:53:59.280189    7944 kubeadm.go:319] 
	I1217 00:53:59.280189    7944 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 00:53:59.280712    7944 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 00:53:59.280785    7944 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 00:53:59.280785    7944 kubeadm.go:319] 
	I1217 00:53:59.280785    7944 kubeadm.go:403] duration metric: took 12m7.3287248s to StartCluster
	I1217 00:53:59.280785    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 00:53:59.285017    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 00:53:59.529112    7944 cri.go:89] found id: ""
	I1217 00:53:59.529112    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.529112    7944 logs.go:284] No container was found matching "kube-apiserver"
	I1217 00:53:59.529112    7944 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 00:53:59.533754    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 00:53:59.574863    7944 cri.go:89] found id: ""
	I1217 00:53:59.574863    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.574863    7944 logs.go:284] No container was found matching "etcd"
	I1217 00:53:59.574863    7944 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 00:53:59.579181    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 00:53:59.620688    7944 cri.go:89] found id: ""
	I1217 00:53:59.620688    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.620688    7944 logs.go:284] No container was found matching "coredns"
	I1217 00:53:59.620688    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 00:53:59.627987    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 00:53:59.676059    7944 cri.go:89] found id: ""
	I1217 00:53:59.676114    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.676114    7944 logs.go:284] No container was found matching "kube-scheduler"
	I1217 00:53:59.676114    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 00:53:59.680719    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 00:53:59.723707    7944 cri.go:89] found id: ""
	I1217 00:53:59.723707    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.723707    7944 logs.go:284] No container was found matching "kube-proxy"
	I1217 00:53:59.723707    7944 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 00:53:59.729555    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 00:53:59.774476    7944 cri.go:89] found id: ""
	I1217 00:53:59.774476    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.774560    7944 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 00:53:59.774560    7944 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 00:53:59.780477    7944 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 00:53:59.820909    7944 cri.go:89] found id: ""
	I1217 00:53:59.820909    7944 logs.go:282] 0 containers: []
	W1217 00:53:59.820909    7944 logs.go:284] No container was found matching "kindnet"
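For the post-mortem after StartCluster gives up (12m7s in total, per the duration metric above), minikube switches from the docker-ps name-prefix filter used earlier to crictl's own --name filter. An empty ID list at both layers means the CRI never created the control-plane containers at all. The CRI-level check, standalone:

    sudo crictl ps -a --quiet --name=kube-apiserver   # empty output: no such container was ever created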
	I1217 00:53:59.820909    7944 logs.go:123] Gathering logs for kubelet ...
	I1217 00:53:59.820909    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 00:53:59.893583    7944 logs.go:123] Gathering logs for dmesg ...
	I1217 00:53:59.893583    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 00:53:59.926154    7944 logs.go:123] Gathering logs for describe nodes ...
	I1217 00:53:59.926154    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 00:54:00.179462    7944 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:54:00.169127   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.170223   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.171927   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.173016   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.174482   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 00:54:00.169127   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.170223   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.171927   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.173016   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:54:00.174482   40781 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 00:54:00.179462    7944 logs.go:123] Gathering logs for Docker ...
	I1217 00:54:00.179462    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 00:54:00.221875    7944 logs.go:123] Gathering logs for container status ...
	I1217 00:54:00.221875    7944 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 00:54:00.281055    7944 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000873338s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 00:54:00.281122    7944 out.go:285] * 
	W1217 00:54:00.281210    7944 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000873338s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 00:54:00.281448    7944 out.go:285] * 
	W1217 00:54:00.283315    7944 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 00:54:00.296133    7944 out.go:203] 
	W1217 00:54:00.298699    7944 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	W1217 00:54:00.299289    7944 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 00:54:00.299350    7944 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 00:54:00.301526    7944 out.go:203] 
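	# A minimal sketch of the suggested fix above, assuming the functional-409700
	# profile from this run: retry the start with the kubelet cgroup driver pinned
	# to systemd, exactly as the suggestion line recommends.
	out/minikube-windows-amd64.exe start -p functional-409700 --driver=docker --extra-config=kubelet.cgroup-driver=systemd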
	
	
	==> Docker <==
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799347277Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799352978Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799377780Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799412283Z" level=info msg="Initializing buildkit"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.911073637Z" level=info msg="Completed buildkit initialization"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918044834Z" level=info msg="Daemon has completed initialization"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918252552Z" level=info msg="API listen on [::]:2376"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918284354Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 00:41:48 functional-409700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918293455Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 00:41:48 functional-409700 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:41:48 functional-409700 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 17 00:41:48 functional-409700 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 17 00:41:49 functional-409700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Loaded network plugin cni"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 00:41:49 functional-409700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
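	# cri-dockerd above logs "Setting cgroupDriver cgroupfs"; a quick check of the
	# driver the Docker daemon itself reports (a cgroupfs/systemd mismatch with the
	# kubelet is consistent with the kubelet failures further down):
	docker info --format '{{.CgroupDriver}}'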
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:56:15.207785   44010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.208700   44010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.210701   44010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.213254   44010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:56:15.214614   44010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
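	# The refused connections above target the apiserver port inside the node; a
	# sketch of probing it directly over minikube ssh, assuming curl is present in
	# the kicbase image:
	out/minikube-windows-amd64.exe -p functional-409700 ssh -- curl -sk https://localhost:8441/healthz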
	
	
	==> dmesg <==
	[  +0.001333] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001212] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001083] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000810] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000879] FS:  0000000000000000 GS:  0000000000000000
	[Dec17 00:41] CPU: 8 PID: 65919 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000795] RIP: 0033:0x7fc513f26b20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7fc513f26af6.
	[  +0.000661] RSP: 002b:00007ffce9a430e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000957] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000792] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000787] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001172] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001280] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001257] FS:  0000000000000000 GS:  0000000000000000
	[  +0.952455] CPU: 6 PID: 66046 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000828] RIP: 0033:0x7f7de767eb20
	[  +0.000402] Code: Unable to access opcode bytes at RIP 0x7f7de767eaf6.
	[  +0.000691] RSP: 002b:00007ffdccfc39b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000866] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000810] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001071] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001218] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001105] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001100] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 00:56:15 up  1:15,  0 user,  load average: 0.55, 0.40, 0.45
	Linux functional-409700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:56:11 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:56:12 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 497.
	Dec 17 00:56:12 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:56:12 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:56:12 functional-409700 kubelet[43849]: E1217 00:56:12.429512   43849 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:56:12 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:56:12 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:56:13 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 498.
	Dec 17 00:56:13 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:56:13 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:56:13 functional-409700 kubelet[43860]: E1217 00:56:13.175819   43860 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:56:13 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:56:13 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:56:13 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 499.
	Dec 17 00:56:13 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:56:13 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:56:13 functional-409700 kubelet[43874]: E1217 00:56:13.930093   43874 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:56:13 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:56:13 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:56:14 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 500.
	Dec 17 00:56:14 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:56:14 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:56:14 functional-409700 kubelet[43900]: E1217 00:56:14.692108   43900 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:56:14 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:56:14 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	
-- /stdout --
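The kubelet section above shows the service crash-looping on cgroup v1 validation, with the restart counter at 500; per the preflight warning earlier in the log, kubelet v1.35 or newer only runs on cgroup v1 if the kubelet configuration option FailCgroupV1 is set to false. A hedged troubleshooting sketch using the two commands kubeadm itself suggests, run inside the node over minikube ssh:

    # Inspect the unit state and the journal for the validation error seen above.
    out/minikube-windows-amd64.exe -p functional-409700 ssh -- sudo systemctl status kubelet
    out/minikube-windows-amd64.exe -p functional-409700 ssh -- sudo journalctl -xeu kubelet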
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700: exit status 2 (578.9395ms)
-- stdout --
	Stopped
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-409700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MySQL (23.84s)
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (52.61s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-409700 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
functional_test.go:234: (dbg) Non-zero exit: kubectl --context functional-409700 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'": exit status 1 (50.3336635s)
** stderr ** 
	E1217 00:56:51.833928   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:01.916159   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:11.954327   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:21.996928   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:32.036400   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF
** /stderr **
functional_test.go:236: failed to 'kubectl get nodes' with args "kubectl --context functional-409700 get nodes --output=go-template \"--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'\"": exit status 1
functional_test.go:242: expected to have label "minikube.k8s.io/commit" in node labels but got : 
** stderr ** 
	E1217 00:56:51.833928   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:01.916159   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:11.954327   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:21.996928   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:32.036400   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/version" in node labels but got : 
** stderr ** 
	E1217 00:56:51.833928   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:01.916159   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:11.954327   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:21.996928   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:32.036400   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF
** /stderr **
functional_test.go:242: expected to have label "minikube.k8s.io/updated_at" in node labels but got : 
** stderr ** 
	E1217 00:56:51.833928   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:01.916159   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:11.954327   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:21.996928   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:32.036400   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

functional_test.go:242: expected to have label "minikube.k8s.io/name" in node labels but got : 
** stderr ** 
	E1217 00:56:51.833928   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:01.916159   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:11.954327   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:21.996928   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:32.036400   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF

functional_test.go:242: expected to have label "minikube.k8s.io/primary" in node labels but got : 
** stderr ** 
	E1217 00:56:51.833928   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:01.916159   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:11.954327   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:21.996928   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	E1217 00:57:32.036400   10104 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://127.0.0.1:56622/api?timeout=32s\": EOF"
	Unable to connect to the server: EOF
** /stderr **
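The failing assertions all run the same kubectl go-template query; against a healthy cluster it prints the node's label keys, including the minikube.k8s.io/* labels the test expects, so it doubles as a manual check once the apiserver is reachable. Quoting here assumes a POSIX shell; the test run above wraps the template in an extra layer of quotes for Windows:

    kubectl --context functional-409700 get nodes --output=go-template --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'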
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect functional-409700
helpers_test.go:244: (dbg) docker inspect functional-409700:
-- stdout --
	[
	    {
	        "Id": "ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de",
	        "Created": "2025-12-17T00:24:05.223199249Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 43007,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T00:24:05.522288836Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hostname",
	        "HostsPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/hosts",
	        "LogPath": "/var/lib/docker/containers/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de/ee5097ea8c4b02ab5ad5b87837c934c861307eb937d10192dc8afd180e3cf1de-json.log",
	        "Name": "/functional-409700",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-409700:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-409700",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 4294967296,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/merged",
	                "UpperDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/diff",
	                "WorkDir": "/var/lib/docker/overlay2/706d78709ecdb14080208644d09e87656412f6d5b3f4efde8e7d27bcab930a2c/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-409700",
	                "Source": "/var/lib/docker/volumes/functional-409700/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-409700",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-409700",
	                "name.minikube.sigs.k8s.io": "functional-409700",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6e875b43ca920e8e90c82b8f1c4d2b0999a57d980ebe17c6406f45a4ccb58168",
	            "SandboxKey": "/var/run/docker/netns/6e875b43ca92",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56623"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56619"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56620"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56621"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "56622"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-409700": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "ee1b2722ed4e503e063723d4c0c00abc99d4e57387b6e181156511528a5a0896",
	                    "EndpointID": "42fbe7a4b084643a92cc2b6c93734665bcde06afb5eef9fe47b1c8f2757b2d71",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-409700",
	                        "ee5097ea8c4b"
	                    ]
	                }
	            }
	        }
	    }
	]
-- /stdout --
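Rather than scanning the full inspect JSON, the one mapping these tests depend on (the apiserver's 8441/tcp, published on 127.0.0.1:56622 in this run) can be extracted with a format template; a small sketch:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}' functional-409700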
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p functional-409700 -n functional-409700: exit status 2 (565.0319ms)
-- stdout --
	Running
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 logs -n 25: (1.0608437s)
helpers_test.go:261: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                                           ARGS                                                                            │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ license        │                                                                                                                                                           │ minikube          │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image load --daemon kicbase/echo-server:functional-409700 --alsologtostderr                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image ls                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image load --daemon kicbase/echo-server:functional-409700 --alsologtostderr                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image ls                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image load --daemon kicbase/echo-server:functional-409700 --alsologtostderr                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image ls                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image save kicbase/echo-server:functional-409700 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image rm kicbase/echo-server:functional-409700 --alsologtostderr                                                                        │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image ls                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr                                       │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image ls                                                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ image          │ functional-409700 image save --daemon kicbase/echo-server:functional-409700 --alsologtostderr                                                             │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:56 UTC │ 17 Dec 25 00:56 UTC │
	│ start          │ -p functional-409700 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0                                       │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │                     │
	│ start          │ -p functional-409700 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0                                       │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │                     │
	│ start          │ -p functional-409700 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-beta.0                                                 │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-409700 --alsologtostderr -v=1                                                                                            │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │                     │
	│ update-context │ functional-409700 update-context --alsologtostderr -v=2                                                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │ 17 Dec 25 00:57 UTC │
	│ update-context │ functional-409700 update-context --alsologtostderr -v=2                                                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │ 17 Dec 25 00:57 UTC │
	│ update-context │ functional-409700 update-context --alsologtostderr -v=2                                                                                                   │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │ 17 Dec 25 00:57 UTC │
	│ image          │ functional-409700 image ls --format short --alsologtostderr                                                                                               │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │ 17 Dec 25 00:57 UTC │
	│ image          │ functional-409700 image ls --format yaml --alsologtostderr                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │ 17 Dec 25 00:57 UTC │
	│ ssh            │ functional-409700 ssh pgrep buildkitd                                                                                                                     │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │                     │
	│ image          │ functional-409700 image build -t localhost/my-image:functional-409700 testdata\build --alsologtostderr                                                    │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │                     │
	│ image          │ functional-409700 image ls --format json --alsologtostderr                                                                                                │ functional-409700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:57 UTC │                     │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
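	# The audit rows above exercise an image save, rm, load round-trip; the same
	# flow condensed (tar path shortened here for illustration, the run used a
	# Jenkins workspace path):
	out/minikube-windows-amd64.exe -p functional-409700 image save kicbase/echo-server:functional-409700 echo-server-save.tar
	out/minikube-windows-amd64.exe -p functional-409700 image rm kicbase/echo-server:functional-409700
	out/minikube-windows-amd64.exe -p functional-409700 image load echo-server-save.tar
	out/minikube-windows-amd64.exe -p functional-409700 image ls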
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:57:29
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:57:29.312362   13608 out.go:360] Setting OutFile to fd 1036 ...
	I1217 00:57:29.401841   13608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:57:29.401841   13608 out.go:374] Setting ErrFile to fd 1776...
	I1217 00:57:29.401841   13608 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:57:29.414842   13608 out.go:368] Setting JSON to false
	I1217 00:57:29.416844   13608 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4637,"bootTime":1765928411,"procs":193,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:57:29.416844   13608 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:57:29.420835   13608 out.go:179] * [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:57:29.424836   13608 notify.go:221] Checking for updates...
	I1217 00:57:29.426837   13608 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:57:29.428844   13608 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:57:29.430846   13608 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:57:29.432842   13608 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:57:29.435843   13608 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:57:29.165357   10540 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:57:29.165357   10540 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:57:29.278361   10540 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:57:29.282363   10540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:57:29.529841   10540 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:90 SystemTime:2025-12-17 00:57:29.506866483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:57:29.532834   10540 out.go:179] * Using the docker driver based on existing profile
	I1217 00:57:29.535840   10540 start.go:309] selected driver: docker
	I1217 00:57:29.535840   10540 start.go:927] validating driver "docker" against &{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:57:29.535840   10540 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:57:29.589847   10540 out.go:203] 
	W1217 00:57:29.591839   10540 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 00:57:29.593846   10540 out.go:203] 
	I1217 00:57:29.437835   13608 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:57:29.438844   13608 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:57:29.580837   13608 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:57:29.583837   13608 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:57:29.817352   13608 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 00:57:29.796553997 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:57:29.820347   13608 out.go:179] * Using the docker driver based on existing profile
	I1217 00:57:29.823346   13608 start.go:309] selected driver: docker
	I1217 00:57:29.823346   13608 start.go:927] validating driver "docker" against &{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:57:29.823346   13608 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:57:29.829348   13608 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:57:30.066976   13608 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 00:57:30.047165036 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:57:30.101067   13608 cni.go:84] Creating CNI manager for ""
	I1217 00:57:30.101067   13608 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:57:30.101067   13608 start.go:353] cluster config:
	{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDN
SLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:57:30.105229   13608 out.go:179] * dry-run validation complete!
	
	
	==> Docker <==
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799377780Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.799412283Z" level=info msg="Initializing buildkit"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.911073637Z" level=info msg="Completed buildkit initialization"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918044834Z" level=info msg="Daemon has completed initialization"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918252552Z" level=info msg="API listen on [::]:2376"
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918284354Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 00:41:48 functional-409700 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 00:41:48 functional-409700 dockerd[21759]: time="2025-12-17T00:41:48.918293455Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 00:41:48 functional-409700 systemd[1]: Stopping cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:41:48 functional-409700 systemd[1]: cri-docker.service: Deactivated successfully.
	Dec 17 00:41:48 functional-409700 systemd[1]: Stopped cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 17 00:41:49 functional-409700 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Loaded network plugin cni"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 00:41:49 functional-409700 cri-dockerd[22081]: time="2025-12-17T00:41:49Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 00:41:49 functional-409700 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	Dec 17 00:57:33 functional-409700 dockerd[21759]: 2025/12/17 00:57:33 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	Dec 17 00:57:33 functional-409700 dockerd[21759]: 2025/12/17 00:57:33 http2: server: error reading preface from client @: read unix /var/run/docker.sock->@: read: connection reset by peer
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 00:57:33.604126   46557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:33.604939   46557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:33.607548   46557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:33.608810   46557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	E1217 00:57:33.609759   46557 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8441/api?timeout=32s\": dial tcp [::1]:8441: connect: connection refused"
	The connection to the server localhost:8441 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.001333] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001212] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001083] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000810] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000879] FS:  0000000000000000 GS:  0000000000000000
	[Dec17 00:41] CPU: 8 PID: 65919 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000795] RIP: 0033:0x7fc513f26b20
	[  +0.000396] Code: Unable to access opcode bytes at RIP 0x7fc513f26af6.
	[  +0.000661] RSP: 002b:00007ffce9a430e0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000957] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000792] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000787] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001172] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001280] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001257] FS:  0000000000000000 GS:  0000000000000000
	[  +0.952455] CPU: 6 PID: 66046 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000828] RIP: 0033:0x7f7de767eb20
	[  +0.000402] Code: Unable to access opcode bytes at RIP 0x7f7de767eaf6.
	[  +0.000691] RSP: 002b:00007ffdccfc39b0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000866] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000810] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.001071] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.001218] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.001105] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.001100] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 00:57:33 up  1:16,  0 user,  load average: 0.49, 0.44, 0.46
	Linux functional-409700 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 00:57:30 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:57:31 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 602.
	Dec 17 00:57:31 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:57:31 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:57:31 functional-409700 kubelet[46149]: E1217 00:57:31.188094   46149 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:57:31 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:57:31 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:57:31 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 603.
	Dec 17 00:57:31 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:57:31 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:57:31 functional-409700 kubelet[46242]: E1217 00:57:31.920902   46242 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:57:31 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:57:31 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:57:32 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 604.
	Dec 17 00:57:32 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:57:32 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:57:32 functional-409700 kubelet[46329]: E1217 00:57:32.652206   46329 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:57:32 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:57:32 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 00:57:33 functional-409700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 605.
	Dec 17 00:57:33 functional-409700 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:57:33 functional-409700 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 00:57:33 functional-409700 kubelet[46513]: E1217 00:57:33.456610   46513 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 00:57:33 functional-409700 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 00:57:33 functional-409700 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p functional-409700 -n functional-409700: exit status 2 (573.8889ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "functional-409700" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NodeLabels (52.61s)
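
The kubelet journal above exposes the root cause shared by this whole parallel group: kubelet v1.35.0-beta.0 refuses to start because the node is still on cgroup v1, so the apiserver never comes back and every dependent command sees a stopped control plane. A minimal Go sketch of the usual triage step, checking the cgroup filesystem type inside the node container (the container name comes from the log; the helper is illustrative, not part of the harness):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // cgroupVersion reports "v2" when /sys/fs/cgroup inside the node is
    // mounted as cgroup2fs, and "v1" otherwise. It runs the classic
    // runbook check `stat -fc %T /sys/fs/cgroup/` via docker exec.
    func cgroupVersion(container string) (string, error) {
    	out, err := exec.Command("docker", "exec", container,
    		"stat", "-fc", "%T", "/sys/fs/cgroup/").Output()
    	if err != nil {
    		return "", err
    	}
    	if strings.TrimSpace(string(out)) == "cgroup2fs" {
    		return "v2", nil
    	}
    	return "v1", nil
    }

    func main() {
    	v, err := cgroupVersion("functional-409700")
    	if err != nil {
    		fmt.Println("check failed:", err)
    		return
    	}
    	// Anything other than v2 matches the 600+ kubelet restarts above.
    	fmt.Println("node cgroup:", v)
    }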

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.11s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-409700 create deployment hello-node --image kicbase/echo-server
functional_test.go:1451: (dbg) Non-zero exit: kubectl --context functional-409700 create deployment hello-node --image kicbase/echo-server: exit status 1 (108.0504ms)

                                                
                                                
** stderr ** 
	error: failed to create deployment: Post "https://127.0.0.1:56622/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create&fieldValidation=Strict": EOF

                                                
                                                
** /stderr **
functional_test.go:1453: failed to create hello-node deployment with this command "kubectl --context functional-409700 create deployment hello-node --image kicbase/echo-server": exit status 1.
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/DeployApp (0.11s)
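
The EOF on the Post request means the forwarded port 56622 accepted the TCP connection but no apiserver answered behind it. A hedged pre-flight sketch that probes the apiserver's /readyz endpoint (readable without credentials under default RBAC) before attempting the deployment; probeReadyz is an illustrative name, and TLS verification is skipped only so the sketch works against the port-forwarded address without the cluster CA:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    // probeReadyz asks the apiserver whether it is serving. An EOF or
    // connection error here reproduces the failure reported above.
    func probeReadyz(addr string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		Transport: &http.Transport{
    			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
    		},
    	}
    	resp, err := client.Get(addr + "/readyz")
    	if err != nil {
    		return err
    	}
    	defer resp.Body.Close()
    	if resp.StatusCode != http.StatusOK {
    		return fmt.Errorf("apiserver not ready: %s", resp.Status)
    	}
    	return nil
    }

    func main() {
    	if err := probeReadyz("https://127.0.0.1:56622"); err != nil {
    		fmt.Println("skip deploy:", err)
    		return
    	}
    	fmt.Println("apiserver ready; safe to create the deployment")
    }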

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 service list
functional_test.go:1469: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-409700 service list: exit status 103 (474.7417ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-409700 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-409700"

                                                
                                                
-- /stdout --
functional_test.go:1471: failed to do service list. args "out/minikube-windows-amd64.exe -p functional-409700 service list" : exit status 103
functional_test.go:1474: expected 'service list' to contain *hello-node* but got -"* The control-plane node functional-409700 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-409700\"\n"-
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/List (0.47s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.5s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 service list -o json
functional_test.go:1499: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-409700 service list -o json: exit status 103 (495.9949ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-409700 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-409700"

                                                
                                                
-- /stdout --
functional_test.go:1501: failed to list services with json format. args "out/minikube-windows-amd64.exe -p functional-409700 service list -o json": exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/JSONOutput (0.50s)
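
For reference, on a healthy cluster the JSON variant prints an array of service rows. A sketch decoding that output; the field names (Namespace, Name, URLs) follow the shape current minikube releases emit, which is assumed here rather than guaranteed:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // serviceRow mirrors one assumed entry of `minikube service list -o json`.
    type serviceRow struct {
    	Namespace string   `json:"Namespace"`
    	Name      string   `json:"Name"`
    	URLs      []string `json:"URLs"`
    }

    func main() {
    	// Illustrative payload, not captured from this failed run.
    	raw := []byte(`[{"Namespace":"default","Name":"hello-node","URLs":["http://192.168.49.2:31000"]}]`)

    	var rows []serviceRow
    	if err := json.Unmarshal(raw, &rows); err != nil {
    		fmt.Println("decode failed:", err)
    		return
    	}
    	for _, r := range rows {
    		fmt.Printf("%s/%s -> %v\n", r.Namespace, r.Name, r.URLs)
    	}
    }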

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-409700 service --namespace=default --https --url hello-node: exit status 103 (539.5925ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-409700 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-409700"

                                                
                                                
-- /stdout --
functional_test.go:1521: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-409700 service --namespace=default --https --url hello-node" : exit status 103
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/HTTPS (0.54s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-409700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-409700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:190: tunnel command failed with unexpected error: exit code 103. stderr: I1217 00:55:22.369614    4296 out.go:360] Setting OutFile to fd 1868 ...
I1217 00:55:22.451608    4296 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:55:22.451608    4296 out.go:374] Setting ErrFile to fd 2028...
I1217 00:55:22.451608    4296 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:55:22.462600    4296 mustload.go:66] Loading cluster: functional-409700
I1217 00:55:22.463611    4296 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:55:22.470618    4296 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
I1217 00:55:22.524599    4296 host.go:66] Checking if "functional-409700" exists ...
I1217 00:55:22.528601    4296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8441/tcp") 0).HostPort}}'" functional-409700
I1217 00:55:22.581610    4296 api_server.go:166] Checking apiserver status ...
I1217 00:55:22.585604    4296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I1217 00:55:22.589608    4296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
I1217 00:55:22.640614    4296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
W1217 00:55:22.764006    4296 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
stdout:

                                                
                                                
stderr:
I1217 00:55:22.767257    4296 out.go:179] * The control-plane node functional-409700 apiserver is not running: (state=Stopped)
I1217 00:55:22.772019    4296 out.go:179]   To start a cluster, run: "minikube start -p functional-409700"

                                                
                                                
stdout: * The control-plane node functional-409700 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-409700"
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-409700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: read stdout failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-409700 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: read stderr failed: read |0: file already closed
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-409700 tunnel --alsologtostderr] stderr:
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-409700 tunnel --alsologtostderr] ...
helpers_test.go:520: unable to terminate pid 9940: Access is denied.
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-409700 tunnel --alsologtostderr] stdout:
functional_test_tunnel_test.go:194: (dbg) [out/minikube-windows-amd64.exe -p functional-409700 tunnel --alsologtostderr] stderr:
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/RunSecondTunnel (0.52s)
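
The stderr trace shows the tunnel's own pre-flight: it inspects the node container, then runs "sudo pgrep -xnf kube-apiserver.*minikube.*" over SSH and exits with code 103 when no pid comes back. The same check can be reproduced from the host; this sketch shells in with docker exec instead of SSH purely for brevity:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // apiserverRunning mirrors the check visible in the tunnel log above:
    // no kube-apiserver pid inside the node means "control plane stopped".
    func apiserverRunning(container string) bool {
    	out, err := exec.Command("docker", "exec", container,
    		"pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	return err == nil && strings.TrimSpace(string(out)) != ""
    }

    func main() {
    	if !apiserverRunning("functional-409700") {
    		fmt.Println("apiserver not running; tunnel would exit with code 103")
    		return
    	}
    	fmt.Println("apiserver up; tunnel can program routes")
    }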

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (20.2s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-409700 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:212: (dbg) Non-zero exit: kubectl --context functional-409700 apply -f testdata\testsvc.yaml: exit status 1 (20.1970895s)

                                                
                                                
** stderr ** 
	error: error validating "testdata\\testsvc.yaml": error validating data: failed to download openapi: Get "https://127.0.0.1:56622/openapi/v2?timeout=32s": EOF; if you choose to ignore these errors, turn validation off with --validate=false

                                                
                                                
** /stderr **
functional_test_tunnel_test.go:214: kubectl --context functional-409700 apply -f testdata\testsvc.yaml failed: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/WaitService/Setup (20.20s)
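
The manifest itself is not the problem: kubectl validation downloads the OpenAPI schema from the apiserver, and with the control plane down that GET dies with EOF. The suggested --validate=false would only skip the schema download; the subsequent POST would fail the same way, so the practical recovery is to retry once the apiserver is serving again. A runbook-style sketch (not harness code) that retries the same apply with a fixed backoff:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // applyWithRetry re-runs the kubectl apply from the test, backing off
    // between attempts; it can only succeed once the apiserver is back.
    func applyWithRetry(manifest string, attempts int) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		out, e := exec.Command("kubectl", "--context", "functional-409700",
    			"apply", "-f", manifest).CombinedOutput()
    		if e == nil {
    			fmt.Print(string(out))
    			return nil
    		}
    		err = e
    		time.Sleep(5 * time.Second) // simple fixed backoff
    	}
    	return err
    }

    func main() {
    	if err := applyWithRetry(`testdata\testsvc.yaml`, 3); err != nil {
    		fmt.Println("apply never succeeded:", err)
    	}
    }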

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.52s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-409700 service hello-node --url --format={{.IP}}: exit status 103 (517.7213ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-409700 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-409700"

                                                
                                                
-- /stdout --
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-409700 service hello-node --url --format={{.IP}}": exit status 103
functional_test.go:1558: "* The control-plane node functional-409700 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-409700\"" is not a valid IP
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/Format (0.52s)
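
The --format flag is a Go text/template applied to each discovered service endpoint, so on success {{.IP}} would print only the address. A stand-in sketch of the same template mechanics; the endpoint struct and its fields are hypothetical, chosen just to make the template render:

    package main

    import (
    	"os"
    	"text/template"
    )

    // endpoint is a stand-in for whatever minikube feeds the template;
    // only the IP field matters for {{.IP}}.
    type endpoint struct {
    	IP   string
    	Port int
    }

    func main() {
    	tmpl := template.Must(template.New("svc").Parse("{{.IP}}\n"))
    	_ = tmpl.Execute(os.Stdout, endpoint{IP: "192.168.49.2", Port: 31000})
    }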

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.46s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-409700 service hello-node --url: exit status 103 (456.0483ms)

                                                
                                                
-- stdout --
	* The control-plane node functional-409700 apiserver is not running: (state=Stopped)
	  To start a cluster, run: "minikube start -p functional-409700"

                                                
                                                
-- /stdout --
functional_test.go:1571: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-409700 service hello-node --url": exit status 103
functional_test.go:1575: found endpoint for hello-node: * The control-plane node functional-409700 apiserver is not running: (state=Stopped)
To start a cluster, run: "minikube start -p functional-409700"
functional_test.go:1579: failed to parse "* The control-plane node functional-409700 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-409700\"": parse "* The control-plane node functional-409700 apiserver is not running: (state=Stopped)\n  To start a cluster, run: \"minikube start -p functional-409700\"": net/url: invalid control character in URL
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ServiceCmd/URL (0.46s)
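
The final parse error is mechanical: the test feeds minikube's two-line advisory into net/url, and url.Parse rejects the embedded newline as an invalid control character. A small sketch reproducing both sides:

    package main

    import (
    	"fmt"
    	"net/url"
    )

    func main() {
    	// A real service URL parses cleanly.
    	if _, err := url.Parse("http://192.168.49.2:31000"); err == nil {
    		fmt.Println("valid URL accepted")
    	}
    	// The advisory text contains a newline, which net/url rejects,
    	// exactly the error reported by the test above.
    	msg := "* The control-plane node is not running\n  To start a cluster, run: minikube start"
    	if _, err := url.Parse(msg); err != nil {
    		fmt.Println("parse error:", err)
    	}
    }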

                                                
                                    
x
+
TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell (2.8s)

                                                
                                                
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-409700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-409700"
functional_test.go:514: (dbg) Non-zero exit: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-409700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-409700": exit status 1 (2.8021587s)

                                                
                                                
-- stdout --
	functional-409700
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	docker-env: in-use
	

                                                
                                                
-- /stdout --
functional_test.go:520: failed to do status after eval-ing docker-env. error: exit status 1
--- FAIL: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DockerEnv/powershell (2.80s)
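
The status block in stdout explains the exit status 1: docker-env evaluated successfully (docker-env: in-use), but status then reports kubelet and apiserver stopped and exits non-zero. The same fields can be read programmatically; the struct below mirrors the names shown in the stdout block, assuming the JSON output of "status -o json" keeps them:

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // clusterStatus holds the subset of fields this report shows.
    type clusterStatus struct {
    	Host      string
    	Kubelet   string
    	APIServer string
    }

    func main() {
    	out, err := exec.Command("out/minikube-windows-amd64.exe",
    		"status", "-p", "functional-409700", "-o", "json").Output()
    	if err != nil {
    		// status exits non-zero when components are stopped but still
    		// prints the JSON payload, so keep decoding.
    		fmt.Println("non-zero status exit:", err)
    	}
    	var st clusterStatus
    	if jsonErr := json.Unmarshal(out, &st); jsonErr != nil {
    		fmt.Println("decode failed:", jsonErr)
    		return
    	}
    	fmt.Printf("host=%s kubelet=%s apiserver=%s\n", st.Host, st.Kubelet, st.APIServer)
    }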

                                                
                                    
x
+
TestKubernetesUpgrade (876.66s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-228200 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker
E1217 01:40:22.375719    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:40:33.741446    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:222: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-228200 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker: (1m35.8806513s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-228200
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-228200: (2.8863302s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-228200 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-228200 status --format={{.Host}}: exit status 7 (244.0794ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
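
Up to this point the test has driven the first half of the upgrade recipe: start the profile on v1.28.0, stop it, and confirm the host reports Stopped. The second start below reuses the same profile with --kubernetes-version=v1.35.0-beta.0. A condensed sketch of the full sequence, with binary path, flags and versions copied from the log above:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run invokes the same minikube binary the test uses.
    func run(args ...string) error {
    	return exec.Command("out/minikube-windows-amd64.exe", args...).Run()
    }

    func main() {
    	profile := "kubernetes-upgrade-228200"
    	steps := [][]string{
    		{"start", "-p", profile, "--memory=3072", "--kubernetes-version=v1.28.0", "--driver=docker"},
    		{"stop", "-p", profile},
    		{"start", "-p", profile, "--memory=3072", "--kubernetes-version=v1.35.0-beta.0", "--driver=docker"},
    	}
    	for _, s := range steps {
    		if err := run(s...); err != nil {
    			fmt.Println("step failed:", s[0], err)
    			return
    		}
    	}
    	fmt.Println("upgrade sequence completed")
    }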
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-228200 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:243: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-228200 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker: exit status 109 (12m38.2098191s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-228200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "kubernetes-upgrade-228200" primary control-plane node in "kubernetes-upgrade-228200" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 01:41:17.926935    5332 out.go:360] Setting OutFile to fd 1812 ...
	I1217 01:41:17.972246    5332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:41:17.972246    5332 out.go:374] Setting ErrFile to fd 1468...
	I1217 01:41:17.972246    5332 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:41:17.986246    5332 out.go:368] Setting JSON to false
	I1217 01:41:17.988914    5332 start.go:133] hostinfo: {"hostname":"minikube4","uptime":7266,"bootTime":1765928411,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 01:41:17.988993    5332 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 01:41:17.992399    5332 out.go:179] * [kubernetes-upgrade-228200] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 01:41:17.995619    5332 notify.go:221] Checking for updates...
	I1217 01:41:17.997273    5332 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 01:41:17.998848    5332 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:41:18.003498    5332 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 01:41:18.011690    5332 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:41:18.013845    5332 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:41:18.016886    5332 config.go:182] Loaded profile config "kubernetes-upgrade-228200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1217 01:41:18.018030    5332 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:41:18.139113    5332 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 01:41:18.143798    5332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:41:18.410702    5332 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:99 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:41:18.387479179 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:41:18.414702    5332 out.go:179] * Using the docker driver based on existing profile
	I1217 01:41:18.416703    5332 start.go:309] selected driver: docker
	I1217 01:41:18.417704    5332 start.go:927] validating driver "docker" against &{Name:kubernetes-upgrade-228200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:kubernetes-upgrade-228200 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemu
FirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:41:18.417704    5332 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:41:18.460858    5332 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:41:18.707198    5332 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:99 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:41:18.680377509 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:41:18.707237    5332 cni.go:84] Creating CNI manager for ""
	I1217 01:41:18.707237    5332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
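The cni.go decision above is a version gate: with the docker driver and the docker container runtime on Kubernetes v1.24 or newer, dockershim is gone, the runtime is reached through cri-dockerd, and an explicit CNI is required, so minikube recommends its bridge CNI. A minimal sketch of that gate; `needsBridgeCNI` is a hypothetical helper, not minikube's actual code:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// needsBridgeCNI mirrors the decision logged above: with the docker driver
// and docker runtime on Kubernetes v1.24+, an explicit CNI (bridge) is
// recommended because dockershim and its built-in networking are gone.
// Hypothetical helper for illustration only.
func needsBridgeCNI(driver, runtime, k8sVersion string) bool {
	if driver != "docker" || runtime != "docker" {
		return false
	}
	parts := strings.SplitN(strings.TrimPrefix(k8sVersion, "v"), ".", 3)
	if len(parts) < 2 {
		return false
	}
	major, err1 := strconv.Atoi(parts[0])
	minor, err2 := strconv.Atoi(parts[1])
	if err1 != nil || err2 != nil {
		return false
	}
	return major > 1 || (major == 1 && minor >= 24)
}

func main() {
	fmt.Println(needsBridgeCNI("docker", "docker", "v1.35.0-beta.0")) // true
	fmt.Println(needsBridgeCNI("docker", "docker", "v1.23.9"))        // false
}
```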
	I1217 01:41:18.707237    5332 start.go:353] cluster config:
	{Name:kubernetes-upgrade-228200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-228200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:41:18.710615    5332 out.go:179] * Starting "kubernetes-upgrade-228200" primary control-plane node in "kubernetes-upgrade-228200" cluster
	I1217 01:41:18.715391    5332 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 01:41:18.729892    5332 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:41:18.732558    5332 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:41:18.732558    5332 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:41:18.732558    5332 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 01:41:18.732558    5332 cache.go:65] Caching tarball of preloaded images
	I1217 01:41:18.732558    5332 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 01:41:18.732558    5332 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 01:41:18.733553    5332 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-228200\config.json ...
	I1217 01:41:18.815548    5332 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:41:18.815548    5332 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:41:18.815647    5332 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:41:18.815758    5332 start.go:360] acquireMachinesLock for kubernetes-upgrade-228200: {Name:mk64d1e38c6062034af1504b3f3cc95fb53b98ae Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:41:18.815846    5332 start.go:364] duration metric: took 42.1µs to acquireMachinesLock for "kubernetes-upgrade-228200"
	I1217 01:41:18.815846    5332 start.go:96] Skipping create...Using existing machine configuration
	I1217 01:41:18.815846    5332 fix.go:54] fixHost starting: 
	I1217 01:41:18.827495    5332 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-228200 --format={{.State.Status}}
	I1217 01:41:18.904082    5332 fix.go:112] recreateIfNeeded on kubernetes-upgrade-228200: state=Stopped err=<nil>
	W1217 01:41:18.904082    5332 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 01:41:18.906079    5332 out.go:252] * Restarting existing docker container for "kubernetes-upgrade-228200" ...
	I1217 01:41:18.911088    5332 cli_runner.go:164] Run: docker start kubernetes-upgrade-228200
	I1217 01:41:19.451545    5332 cli_runner.go:164] Run: docker container inspect kubernetes-upgrade-228200 --format={{.State.Status}}
	I1217 01:41:19.513841    5332 kic.go:430] container "kubernetes-upgrade-228200" state is running.
	I1217 01:41:19.518835    5332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-228200
	I1217 01:41:19.572832    5332 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-228200\config.json ...
	I1217 01:41:19.573835    5332 machine.go:94] provisionDockerMachine start ...
	I1217 01:41:19.576842    5332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228200
	I1217 01:41:19.629843    5332 main.go:143] libmachine: Using SSH client type: native
	I1217 01:41:19.630834    5332 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 60993 <nil> <nil>}
	I1217 01:41:19.630834    5332 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:41:19.632840    5332 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
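The handshake EOF above is transient: the container was restarted a second earlier and its sshd is not yet accepting connections, so the provisioner keeps retrying until the dial succeeds (about three seconds later, below). A rough sketch of such a readiness loop, assuming a plain TCP probe is an acceptable stand-in for the full SSH handshake:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH polls a TCP port until something accepts the connection or the
// deadline passes. A rough stand-in for the retry behind the
// "Error dialing TCP ... EOF" / eventual-success pair in this log; the real
// provisioner also completes an SSH handshake, which this sketch skips.
func waitForSSH(addr string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("ssh not reachable at %s: %w", addr, err)
		}
		time.Sleep(500 * time.Millisecond) // container's sshd may still be starting
	}
}

func main() {
	// 60993 is the host port mapped to the container's 22/tcp in this log.
	if err := waitForSSH("127.0.0.1:60993", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
```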
	I1217 01:41:22.803501    5332 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-228200
	
	I1217 01:41:22.803501    5332 ubuntu.go:182] provisioning hostname "kubernetes-upgrade-228200"
	I1217 01:41:22.806504    5332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228200
	I1217 01:41:22.867500    5332 main.go:143] libmachine: Using SSH client type: native
	I1217 01:41:22.867500    5332 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 60993 <nil> <nil>}
	I1217 01:41:22.867500    5332 main.go:143] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-228200 && echo "kubernetes-upgrade-228200" | sudo tee /etc/hostname
	I1217 01:41:23.061257    5332 main.go:143] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-228200
	
	I1217 01:41:23.066263    5332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228200
	I1217 01:41:23.125685    5332 main.go:143] libmachine: Using SSH client type: native
	I1217 01:41:23.126698    5332 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 60993 <nil> <nil>}
	I1217 01:41:23.126698    5332 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-228200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-228200/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-228200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:41:23.313464    5332 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:41:23.313464    5332 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 01:41:23.314463    5332 ubuntu.go:190] setting up certificates
	I1217 01:41:23.314463    5332 provision.go:84] configureAuth start
	I1217 01:41:23.317465    5332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-228200
	I1217 01:41:23.376469    5332 provision.go:143] copyHostCerts
	I1217 01:41:23.376469    5332 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 01:41:23.376469    5332 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 01:41:23.376469    5332 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 01:41:23.377471    5332 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 01:41:23.377471    5332 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 01:41:23.377471    5332 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 01:41:23.378473    5332 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 01:41:23.378473    5332 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 01:41:23.378473    5332 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 01:41:23.379473    5332 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-228200 san=[127.0.0.1 192.168.76.2 kubernetes-upgrade-228200 localhost minikube]
	I1217 01:41:23.419459    5332 provision.go:177] copyRemoteCerts
	I1217 01:41:23.422458    5332 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:41:23.426459    5332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228200
	I1217 01:41:23.493598    5332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60993 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-228200\id_rsa Username:docker}
	I1217 01:41:23.629704    5332 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 01:41:23.666322    5332 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I1217 01:41:23.698589    5332 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:41:23.731943    5332 provision.go:87] duration metric: took 417.4742ms to configureAuth
	I1217 01:41:23.731943    5332 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:41:23.732542    5332 config.go:182] Loaded profile config "kubernetes-upgrade-228200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 01:41:23.737762    5332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228200
	I1217 01:41:23.803457    5332 main.go:143] libmachine: Using SSH client type: native
	I1217 01:41:23.803457    5332 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 60993 <nil> <nil>}
	I1217 01:41:23.803457    5332 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 01:41:23.974459    5332 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 01:41:23.974459    5332 ubuntu.go:71] root file system type: overlay
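The `df --output=fstype / | tail -n 1` probe above identifies the root filesystem type (overlay inside a kic container), which selects the provisioning path. The same answer can be read from /proc/mounts without shelling out; a minimal sketch:

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// rootFSType answers the same question as the
// `df --output=fstype / | tail -n 1` probe above, without shelling out:
// scan /proc/mounts for the entry whose mount point is "/".
func rootFSType() (string, error) {
	f, err := os.Open("/proc/mounts")
	if err != nil {
		return "", err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// format: device mountpoint fstype options dump pass
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[1] == "/" {
			return fields[2], nil
		}
	}
	return "", fmt.Errorf("no root mount found: %v", sc.Err())
}

func main() {
	t, err := rootFSType()
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println(t) // "overlay" inside a kic container, as in the log
}
```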
	I1217 01:41:23.974459    5332 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 01:41:23.978458    5332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228200
	I1217 01:41:24.032658    5332 main.go:143] libmachine: Using SSH client type: native
	I1217 01:41:24.033656    5332 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 60993 <nil> <nil>}
	I1217 01:41:24.033656    5332 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 01:41:24.228741    5332 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 01:41:24.240623    5332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228200
	I1217 01:41:24.303496    5332 main.go:143] libmachine: Using SSH client type: native
	I1217 01:41:24.303496    5332 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 60993 <nil> <nil>}
	I1217 01:41:24.303496    5332 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 01:41:24.490429    5332 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:41:24.490429    5332 machine.go:97] duration metric: took 4.9165245s to provisionDockerMachine
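The diff-or-swap command at 01:41:24.303 is an idempotent unit update: the freshly rendered docker.service.new only replaces the live unit, and systemd is only reloaded and docker only restarted, when the two files actually differ. A sketch of the same pattern, with paths and reload commands taken from the log (running it for real requires root):

```go
package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

// replaceIfChanged applies the same idempotent-update idiom as the
// "diff ... || { mv ...; systemctl daemon-reload ...; }" command above:
// the new unit only replaces the old one, and the daemon is only reloaded
// and restarted, when the contents differ. Sketch only.
func replaceIfChanged(current, proposed string) error {
	oldData, _ := os.ReadFile(current) // a missing unit reads as empty
	newData, err := os.ReadFile(proposed)
	if err != nil {
		return err
	}
	if bytes.Equal(oldData, newData) {
		return os.Remove(proposed) // nothing to do
	}
	if err := os.Rename(proposed, current); err != nil {
		return err
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "restart", "docker"},
	} {
		if out, err := exec.Command(args[0], args[1:]...).CombinedOutput(); err != nil {
			return fmt.Errorf("%v: %v: %s", args, err, out)
		}
	}
	return nil
}

func main() {
	if err := replaceIfChanged("/lib/systemd/system/docker.service",
		"/lib/systemd/system/docker.service.new"); err != nil {
		fmt.Println(err)
	}
}
```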
	I1217 01:41:24.490429    5332 start.go:293] postStartSetup for "kubernetes-upgrade-228200" (driver="docker")
	I1217 01:41:24.490429    5332 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:41:24.496466    5332 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:41:24.500261    5332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228200
	I1217 01:41:24.552756    5332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60993 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-228200\id_rsa Username:docker}
	I1217 01:41:24.686604    5332 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:41:24.695916    5332 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:41:24.695959    5332 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:41:24.696004    5332 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 01:41:24.696004    5332 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 01:41:24.696699    5332 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 01:41:24.702014    5332 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:41:24.717510    5332 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 01:41:24.743505    5332 start.go:296] duration metric: took 253.0716ms for postStartSetup
	I1217 01:41:24.747505    5332 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:41:24.750507    5332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228200
	I1217 01:41:24.805504    5332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60993 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-228200\id_rsa Username:docker}
	I1217 01:41:24.940657    5332 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:41:24.950930    5332 fix.go:56] duration metric: took 6.1349585s for fixHost
	I1217 01:41:24.950984    5332 start.go:83] releasing machines lock for "kubernetes-upgrade-228200", held for 6.1350514s
	I1217 01:41:24.955847    5332 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubernetes-upgrade-228200
	I1217 01:41:25.019956    5332 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 01:41:25.023957    5332 ssh_runner.go:195] Run: cat /version.json
	I1217 01:41:25.023957    5332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228200
	I1217 01:41:25.026956    5332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubernetes-upgrade-228200
	I1217 01:41:25.084971    5332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60993 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-228200\id_rsa Username:docker}
	I1217 01:41:25.084971    5332 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:60993 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\kubernetes-upgrade-228200\id_rsa Username:docker}
	W1217 01:41:25.202513    5332 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 01:41:25.219344    5332 ssh_runner.go:195] Run: systemctl --version
	I1217 01:41:25.233629    5332 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:41:25.245580    5332 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:41:25.253319    5332 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:41:25.269534    5332 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
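The find/mv pass above disables conflicting bridge and podman CNI configs by renaming them with a .mk_disabled suffix instead of deleting them, so they can be restored later; here nothing matched. A sketch of the same rename pass; `disableBridgeCNIs` is an illustrative name:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// disableBridgeCNIs mirrors the `find /etc/cni/net.d ... -exec mv {} {}.mk_disabled`
// step above: conflicting bridge/podman configs are renamed, not deleted.
func disableBridgeCNIs(dir string) ([]string, error) {
	entries, err := os.ReadDir(dir)
	if err != nil {
		return nil, err
	}
	var moved []string
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue
		}
		if !strings.Contains(name, "bridge") && !strings.Contains(name, "podman") {
			continue
		}
		src := filepath.Join(dir, name)
		if err := os.Rename(src, src+".mk_disabled"); err != nil {
			return moved, err
		}
		moved = append(moved, src)
	}
	return moved, nil
}

func main() {
	moved, err := disableBridgeCNIs("/etc/cni/net.d")
	if err != nil {
		fmt.Println(err)
	}
	fmt.Println("disabled:", moved) // empty here, matching "nothing to disable"
}
```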
	I1217 01:41:25.269534    5332 start.go:496] detecting cgroup driver to use...
	I1217 01:41:25.269534    5332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:41:25.269534    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:41:25.297776    5332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 01:41:25.315777    5332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 01:41:25.329784    5332 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 01:41:25.333773    5332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
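The sed call above forces SystemdCgroup = false in containerd's config.toml so the runtime agrees with the cgroupfs driver detected on the host. The same edit in Go, assuming the key appears as a `SystemdCgroup = <bool>` line as it does in stock configs:

```go
package main

import (
	"fmt"
	"os"
	"regexp"
)

// setSystemdCgroup rewrites the SystemdCgroup key in a containerd
// config.toml, mirroring the `sed -i -r 's|^( *)SystemdCgroup = .*$|...|'`
// call above. Sketch under the assumption that the key appears on its own
// `SystemdCgroup = <bool>` line, as in stock configs.
func setSystemdCgroup(path string, enabled bool) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("${1}SystemdCgroup = %t", enabled)))
	return os.WriteFile(path, out, 0o644)
}

func main() {
	// cgroupfs was detected on the host above, so containerd must not use
	// the systemd cgroup driver.
	if err := setSystemdCgroup("/etc/containerd/config.toml", false); err != nil {
		fmt.Println(err)
	}
}
```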
	W1217 01:41:25.348786    5332 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 01:41:25.348786    5332 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 01:41:25.356780    5332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:41:25.383723    5332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 01:41:25.403738    5332 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:41:25.421720    5332 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:41:25.441723    5332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 01:41:25.470271    5332 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 01:41:25.495856    5332 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 01:41:25.515489    5332 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:41:25.531492    5332 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:41:25.549491    5332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:41:25.723552    5332 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 01:41:25.880476    5332 start.go:496] detecting cgroup driver to use...
	I1217 01:41:25.880476    5332 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:41:25.885472    5332 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 01:41:25.913525    5332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:41:25.940896    5332 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:41:25.996767    5332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:41:26.022239    5332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:41:26.043638    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:41:26.077598    5332 ssh_runner.go:195] Run: which cri-dockerd
	I1217 01:41:26.090576    5332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 01:41:26.104348    5332 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 01:41:26.131586    5332 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 01:41:26.294765    5332 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 01:41:26.443009    5332 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 01:41:26.443009    5332 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 01:41:26.475452    5332 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 01:41:26.509317    5332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:41:26.637734    5332 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 01:41:27.770389    5332 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.1326388s)
	I1217 01:41:27.775426    5332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:41:27.801233    5332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 01:41:27.827619    5332 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 01:41:27.854831    5332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:41:27.879648    5332 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 01:41:28.028118    5332 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 01:41:28.184831    5332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:41:28.342969    5332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 01:41:28.375846    5332 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 01:41:28.402268    5332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:41:28.550345    5332 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 01:41:28.692987    5332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:41:28.713633    5332 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 01:41:28.719360    5332 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 01:41:28.727751    5332 start.go:564] Will wait 60s for crictl version
	I1217 01:41:28.731749    5332 ssh_runner.go:195] Run: which crictl
	I1217 01:41:28.743766    5332 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:41:28.795447    5332 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 01:41:28.799786    5332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:41:28.849103    5332 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:41:28.895348    5332 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 01:41:28.898344    5332 cli_runner.go:164] Run: docker exec -t kubernetes-upgrade-228200 dig +short host.docker.internal
	I1217 01:41:29.034609    5332 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 01:41:29.038607    5332 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 01:41:29.047603    5332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
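The bash one-liner above is the idempotent /etc/hosts update: drop any existing line for the host, append the desired mapping, and copy a temp file into place so the rewrite is atomic. A Go equivalent of the idiom; `ensureHostsEntry` is a hypothetical helper with minimal error handling:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry mirrors the shell idiom above: drop any existing line
// for the host, append the desired mapping, and replace the file via a
// temp copy so readers never see a half-written hosts file.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // stale entry for this host, replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	tmp := path + ".tmp"
	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
		return err
	}
	return os.Rename(tmp, path)
}

func main() {
	// IP and hostname taken from the dig result in the log.
	if err := ensureHostsEntry("/etc/hosts", "192.168.65.254", "host.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
```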
	I1217 01:41:29.068605    5332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-228200
	I1217 01:41:29.129676    5332 kubeadm.go:884] updating cluster {Name:kubernetes-upgrade-228200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-228200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:41:29.129676    5332 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:41:29.132685    5332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 01:41:29.167802    5332 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 01:41:29.167852    5332 docker.go:697] registry.k8s.io/kube-apiserver:v1.35.0-beta.0 wasn't preloaded
	I1217 01:41:29.171228    5332 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1217 01:41:29.188202    5332 ssh_runner.go:195] Run: which lz4
	I1217 01:41:29.200209    5332 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I1217 01:41:29.207377    5332 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I1217 01:41:29.207528    5332 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (284622240 bytes)
	I1217 01:41:32.986683    5332 docker.go:655] duration metric: took 3.7904254s to copy over tarball
	I1217 01:41:32.991648    5332 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I1217 01:41:36.306483    5332 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.3147878s)
	I1217 01:41:36.306483    5332 ssh_runner.go:146] rm: /preloaded.tar.lz4
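The preceding steps implement minikube's image preload: since the v1.35.0-beta.0 apiserver image was not in the engine yet, the lz4-compressed tarball of cached images is copied into the container, extracted over /var so /var/lib/docker is populated, and then removed. A sketch of the extraction step; it shells out to GNU tar with lz4 support, exactly as the log does, and needs root:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
)

// extractPreload unpacks a minikube image preload the way the log does:
// lz4-compressed tar, extended attributes preserved, unpacked over /var so
// /var/lib/docker gains the cached images. Paths are taken from the log.
func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("preload missing: %w", err)
	}
	cmd := exec.Command("tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("extract: %v: %s", err, out)
	}
	return os.Remove(tarball) // matches the rm after extraction
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
```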
	I1217 01:41:36.328289    5332 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I1217 01:41:36.341308    5332 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2660 bytes)
	I1217 01:41:36.365759    5332 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 01:41:36.388540    5332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:41:36.544963    5332 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 01:41:43.863051    5332 ssh_runner.go:235] Completed: sudo systemctl restart docker: (7.3179836s)
	I1217 01:41:43.866020    5332 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 01:41:43.902316    5332 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 01:41:43.902316    5332 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:41:43.902316    5332 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1217 01:41:43.902931    5332 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-228200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-228200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
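The kubelet drop-in above relies on the same blank-ExecStart trick as docker.service: the first ExecStart= clears the command inherited from the base unit before the second sets the new one. A sketch that renders such a drop-in with text/template; the struct fields are illustrative, not minikube's:

```go
package main

import (
	"os"
	"text/template"
)

// kubeletDropIn renders a systemd drop-in like the one logged above. The
// blank ExecStart= line clears the inherited command before the new one is
// set, avoiding the "more than one ExecStart=" error for Type!=oneshot units.
var kubeletDropIn = template.Must(template.New("unit").Parse(`[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart={{.KubeletPath}} --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`))

func main() {
	// Values taken from the log; field names are assumptions.
	_ = kubeletDropIn.Execute(os.Stdout, struct {
		KubeletPath, NodeName, NodeIP string
	}{
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet",
		"kubernetes-upgrade-228200",
		"192.168.76.2",
	})
}
```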
	I1217 01:41:43.907150    5332 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 01:41:43.991741    5332 cni.go:84] Creating CNI manager for ""
	I1217 01:41:43.991741    5332 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 01:41:43.991741    5332 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:41:43.991741    5332 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-228200 NodeName:kubernetes-upgrade-228200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:41:43.991741    5332 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-228200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:41:43.995863    5332 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:41:44.052385    5332 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:41:44.057984    5332 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:41:44.222608    5332 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (331 bytes)
	I1217 01:41:44.303475    5332 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 01:41:44.330462    5332 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2233 bytes)
	I1217 01:41:44.359345    5332 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 01:41:44.366354    5332 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:41:44.389886    5332 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:41:44.550349    5332 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:41:44.594977    5332 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-228200 for IP: 192.168.76.2
	I1217 01:41:44.594977    5332 certs.go:195] generating shared ca certs ...
	I1217 01:41:44.594977    5332 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:41:44.595969    5332 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 01:41:44.595969    5332 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 01:41:44.595969    5332 certs.go:257] generating profile certs ...
	I1217 01:41:44.596981    5332 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-228200\client.key
	I1217 01:41:44.596981    5332 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-228200\apiserver.key.51269b83
	I1217 01:41:44.597964    5332 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-228200\proxy-client.key
	I1217 01:41:44.598964    5332 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 01:41:44.598964    5332 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 01:41:44.598964    5332 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 01:41:44.598964    5332 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 01:41:44.598964    5332 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 01:41:44.599970    5332 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 01:41:44.599970    5332 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 01:41:44.600978    5332 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:41:44.769938    5332 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:41:44.797941    5332 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:41:44.829939    5332 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:41:44.874955    5332 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-228200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I1217 01:41:44.907945    5332 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-228200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:41:44.939976    5332 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-228200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:41:44.986678    5332 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\kubernetes-upgrade-228200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1217 01:41:45.015696    5332 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 01:41:45.047693    5332 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 01:41:45.091374    5332 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:41:45.126368    5332 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:41:45.152461    5332 ssh_runner.go:195] Run: openssl version
	I1217 01:41:45.167473    5332 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 01:41:45.186467    5332 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 01:41:45.208465    5332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 01:41:45.216483    5332 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 01:41:45.220460    5332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 01:41:45.274464    5332 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:41:45.291465    5332 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:41:45.308467    5332 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:41:45.324465    5332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:41:45.332466    5332 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:41:45.336473    5332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:41:45.418035    5332 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:41:45.465032    5332 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 01:41:45.518264    5332 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 01:41:45.535429    5332 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 01:41:45.543443    5332 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 01:41:45.550430    5332 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 01:41:45.639676    5332 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
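The three openssl-hash/symlink sequences above install each CA into the OpenSSL trust store, which looks certificates up through <subject-hash>.0 symlinks in /etc/ssl/certs (3ec20f2e.0, b5213941.0 and 51391683.0 here). A sketch of a single install; `installCACert` is an assumed helper name:

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the openssl-hash/symlink pairs above: OpenSSL finds
// trusted CAs in /etc/ssl/certs via a symlink named <subject-hash>.0, so
// the cert is linked under the hash that `openssl x509 -hash` prints.
func installCACert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	os.Remove(link) // ln -fs semantics: replace a stale link if present
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Println(err)
	}
}
```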
	I1217 01:41:45.657685    5332 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:41:45.670681    5332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 01:41:45.723676    5332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 01:41:45.785945    5332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 01:41:45.834952    5332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 01:41:45.893985    5332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 01:41:45.945997    5332 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
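Each `-checkend 86400` run above asks whether a control-plane certificate will still be valid 24 hours from now; a failing check is what would prompt regeneration before the cluster restart proceeds. The equivalent check via crypto/x509, as a sketch:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin is the crypto/x509 equivalent of
// `openssl x509 -noout -checkend 86400`: report whether the certificate
// will already be expired d from now.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
```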
	I1217 01:41:46.001627    5332 kubeadm.go:401] StartCluster: {Name:kubernetes-upgrade-228200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:kubernetes-upgrade-228200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:41:46.006970    5332 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 01:41:46.045631    5332 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:41:46.062819    5332 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 01:41:46.062819    5332 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 01:41:46.072249    5332 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 01:41:46.087264    5332 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:41:46.091253    5332 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" kubernetes-upgrade-228200
	I1217 01:41:46.150001    5332 kubeconfig.go:47] verify endpoint returned: get endpoint: "kubernetes-upgrade-228200" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 01:41:46.150866    5332 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "kubernetes-upgrade-228200" cluster setting kubeconfig missing "kubernetes-upgrade-228200" context setting]
	I1217 01:41:46.151734    5332 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:41:46.171941    5332 kapi.go:59] client config for kubernetes-upgrade-228200: &rest.Config{Host:"https://127.0.0.1:60997", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-228200/client.crt", KeyFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-228200/client.key", CAFile:"C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x7ff6bb499080), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1217 01:41:46.173105    5332 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1217 01:41:46.173133    5332 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1217 01:41:46.173159    5332 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1217 01:41:46.173159    5332 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1217 01:41:46.173159    5332 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
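
The rest.Config dump above is the client minikube builds to probe the repaired cluster: host https://127.0.0.1:60997 plus the profile's client certificate and key. A minimal sketch of constructing an equivalent config with client-go's clientcmd package (the kubeconfig path mirrors the log; minikube itself assembles the struct directly, as the dump shows):

    package main

    import (
        "fmt"

        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Load the kubeconfig the log shows being repaired and build a
        // rest.Config from it; client-go resolves host and certs for us.
        kubeconfig := `C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`
        cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        fmt.Println("API server:", cfg.Host, "client created:", clientset != nil)
    }
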
	I1217 01:41:46.177456    5332 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 01:41:46.195481    5332 kubeadm.go:645] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml	2025-12-17 01:40:49.923206785 +0000
	+++ /var/tmp/minikube/kubeadm.yaml.new	2025-12-17 01:41:44.345686095 +0000
	@@ -1,4 +1,4 @@
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	   advertiseAddress: 192.168.76.2
	@@ -14,31 +14,34 @@
	   criSocket: unix:///var/run/cri-dockerd.sock
	   name: "kubernetes-upgrade-228200"
	   kubeletExtraArgs:
	-    node-ip: 192.168.76.2
	+    - name: "node-ip"
	+      value: "192.168.76.2"
	   taints: []
	 ---
	-apiVersion: kubeadm.k8s.io/v1beta3
	+apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	   certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	   extraArgs:
	-    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	+    - name: "enable-admission-plugins"
	+      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	 controllerManager:
	   extraArgs:
	-    allocate-node-cidrs: "true"
	-    leader-elect: "false"
	+    - name: "allocate-node-cidrs"
	+      value: "true"
	+    - name: "leader-elect"
	+      value: "false"
	 scheduler:
	   extraArgs:
	-    leader-elect: "false"
	+    - name: "leader-elect"
	+      value: "false"
	 certificatesDir: /var/lib/minikube/certs
	 clusterName: mk
	 controlPlaneEndpoint: control-plane.minikube.internal:8443
	 etcd:
	   local:
	     dataDir: /var/lib/minikube/etcd
	-    extraArgs:
	-      proxy-refresh-interval: "70000"
	-kubernetesVersion: v1.28.0
	+kubernetesVersion: v1.35.0-beta.0
	 networking:
	   dnsDomain: cluster.local
	   podSubnet: "10.244.0.0/16"
	
	-- /stdout --
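
The drift above is the kubeadm v1beta3 -> v1beta4 API migration: v1beta4 turns every extraArgs map into a list of name/value pairs, and this upgrade also bumps kubernetesVersion and drops the etcd proxy-refresh-interval override. minikube decides to reconfigure purely from the exit status of diff: 0 means identical, 1 means drift. A sketch of that decision under the same convention (the helper name is illustrative):

    package main

    import (
        "errors"
        "fmt"
        "os/exec"
    )

    // configDrifted runs `diff -u old new` the way the log does: exit 0
    // means the files match, exit 1 means they differ (drift), and any
    // other status means diff itself failed.
    func configDrifted(oldPath, newPath string) (bool, string, error) {
        out, err := exec.Command("sudo", "diff", "-u", oldPath, newPath).CombinedOutput()
        if err == nil {
            return false, "", nil
        }
        var ee *exec.ExitError
        if errors.As(err, &ee) && ee.ExitCode() == 1 {
            return true, string(out), nil
        }
        return false, "", err
    }

    func main() {
        drift, diff, err := configDrifted(
            "/var/tmp/minikube/kubeadm.yaml",
            "/var/tmp/minikube/kubeadm.yaml.new")
        if err != nil {
            panic(err)
        }
        if drift {
            fmt.Println("reconfiguring cluster; diff:\n" + diff)
        }
    }
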
	I1217 01:41:46.195481    5332 kubeadm.go:1161] stopping kube-system containers ...
	I1217 01:41:46.199713    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 01:41:46.238168    5332 docker.go:484] Stopping containers: [ce404ec360fa d872d44b86a8 2684c1bc7d48 078cba2c262b a108e25d97af 06187964ef53 528c4625cabe b532734d2061]
	I1217 01:41:46.244163    5332 ssh_runner.go:195] Run: docker stop ce404ec360fa d872d44b86a8 2684c1bc7d48 078cba2c262b a108e25d97af 06187964ef53 528c4625cabe b532734d2061
	I1217 01:41:46.288014    5332 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I1217 01:41:46.314012    5332 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:41:46.327008    5332 kubeadm.go:158] found existing configuration files:
	-rw------- 1 root root 5639 Dec 17 01:40 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5652 Dec 17 01:40 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Dec 17 01:41 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Dec 17 01:40 /etc/kubernetes/scheduler.conf
	
	I1217 01:41:46.330011    5332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:41:46.350023    5332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:41:46.368444    5332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:41:46.381015    5332 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:41:46.385013    5332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:41:46.406201    5332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:41:46.419428    5332 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I1217 01:41:46.422427    5332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
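
The grep-and-remove sequence above verifies that each component kubeconfig still points at the shared control-plane endpoint; here controller-manager.conf and scheduler.conf no longer contain https://control-plane.minikube.internal:8443, so they are deleted and regenerated below. An illustrative sketch of the same pruning (plain file reads stand in for the sudo grep calls, so this version skips the privilege handling):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        endpoint := "https://control-plane.minikube.internal:8443"
        for _, conf := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(conf)
            if err != nil || !strings.Contains(string(data), endpoint) {
                // Stale or unreadable: remove so kubeadm regenerates it.
                fmt.Println("removing", conf)
                _ = os.Remove(conf)
            }
        }
    }
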
	I1217 01:41:46.441604    5332 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:41:46.461038    5332 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 01:41:46.538955    5332 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 01:41:47.175682    5332 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I1217 01:41:47.425384    5332 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I1217 01:41:47.500757    5332 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
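
Rather than a full kubeadm init, the restart path replays individual init phases against the regenerated config, in dependency order: certs first, then the kubeconfigs signed by them, then kubelet-start, then the control-plane static pods and local etcd. A sketch of that sequence with os/exec (binary and config paths mirror the log; the loop itself is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        kubeadm := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm"
        cfg := "/var/tmp/minikube/kubeadm.yaml"
        phases := [][]string{
            {"init", "phase", "certs", "all"},
            {"init", "phase", "kubeconfig", "all"},
            {"init", "phase", "kubelet-start"},
            {"init", "phase", "control-plane", "all"},
            {"init", "phase", "etcd", "local"},
        }
        for _, p := range phases {
            args := append([]string{kubeadm}, p...)
            args = append(args, "--config", cfg)
            if out, err := exec.Command("sudo", args...).CombinedOutput(); err != nil {
                fmt.Printf("phase %v failed: %v\n%s", p, err, out)
                return
            }
        }
        fmt.Println("all phases replayed")
    }
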
	I1217 01:41:47.592835    5332 api_server.go:52] waiting for apiserver process to appear ...
	I1217 01:41:47.597326    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:48.097929    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:48.597034    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:49.096834    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:49.598495    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:50.097257    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:50.598420    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:51.099402    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:51.597796    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:52.098007    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:52.597049    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:53.098837    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:53.598778    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:54.098389    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:54.597756    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:55.097828    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:55.598116    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:56.098207    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:56.599708    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:57.099110    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:57.600715    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:58.098291    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:58.596494    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:59.099288    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:41:59.597949    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:00.096415    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:00.598035    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:01.096935    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:01.600798    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:02.098458    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:02.598536    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:03.097483    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:03.597232    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:04.097525    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:04.598279    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:05.096590    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:05.597220    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:06.100695    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:06.598070    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:07.098277    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:07.599776    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:08.096620    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:08.598734    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:09.097613    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:09.598679    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:10.097730    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:10.598031    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:11.098228    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:11.597572    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:12.097918    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:12.598361    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:13.098734    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:13.597349    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:14.098309    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:14.598025    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:15.100033    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:15.598744    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:16.100053    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:16.599389    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:17.099091    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:17.598031    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:18.097779    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:18.599028    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:19.098590    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:19.599407    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:20.098342    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:20.597140    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:21.100316    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:21.596483    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:22.098872    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:22.598914    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:23.097654    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:23.598686    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:24.097890    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:24.600239    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:25.099202    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:25.599720    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:26.098133    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:26.596843    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:27.098606    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:27.600409    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:28.098751    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:28.598106    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:29.098346    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:29.596952    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:30.097054    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:30.599048    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:31.098777    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:31.598831    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:32.099003    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:32.600853    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:33.097528    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:33.598338    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:34.099010    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:34.597840    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:35.099290    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:35.597930    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:36.098630    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:36.597978    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:37.097855    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:37.631050    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:38.098782    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:38.599808    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:39.098310    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:39.597781    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:40.098288    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:40.598521    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:41.099531    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:41.597534    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:42.098941    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:42.596764    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:43.098306    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:43.598750    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:44.098379    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:44.598180    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:45.097470    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:45.599508    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:46.098102    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:46.597380    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:47.097000    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
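
Everything from 01:41:47 onward is a roughly 500ms poll for the kube-apiserver process: pgrep exits non-zero as long as nothing matches, and after a minute of misses minikube starts interleaving the diagnostics passes that follow. A sketch of such a wait loop (the timeout value is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServerProcess polls pgrep, as the log does, until the
    // kube-apiserver process appears or the deadline passes.
    func waitForAPIServerProcess(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            // pgrep exits 0 when at least one process matches the pattern.
            if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
    }

    func main() {
        if err := waitForAPIServerProcess(90 * time.Second); err != nil {
            fmt.Println(err)
        }
    }
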
	I1217 01:42:47.597436    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:42:47.634153    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:42:47.638743    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:42:47.671886    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:42:47.675886    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:42:47.707885    5332 logs.go:282] 0 containers: []
	W1217 01:42:47.707885    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:42:47.710884    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:42:47.752888    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:42:47.756887    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:42:47.787905    5332 logs.go:282] 0 containers: []
	W1217 01:42:47.787905    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:42:47.791885    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:42:47.819885    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:42:47.823906    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:42:47.851889    5332 logs.go:282] 0 containers: []
	W1217 01:42:47.851889    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:42:47.854890    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:42:47.894465    5332 logs.go:282] 0 containers: []
	W1217 01:42:47.894465    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:42:47.894465    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:42:47.894465    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:42:47.935058    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:42:47.935058    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:42:47.997057    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:42:47.997057    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:42:48.033596    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:42:48.033658    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:42:48.125812    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:42:48.125812    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:42:48.125812    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:42:48.177286    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:42:48.177286    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:42:48.216588    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:42:48.216588    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:42:48.286059    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:42:48.286059    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:42:48.336787    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:42:48.336787    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
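
Each diagnostics pass above has the same shape: resolve the container for each control-plane component through a docker name filter (CRI containers are named k8s_<component>_...), tail the last 400 lines of each, pull the kubelet and docker units from journald plus dmesg, and attempt kubectl describe nodes, which keeps failing with connection refused because nothing is serving on 8443. A sketch of the container-log half of that pass:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs mirrors the docker ps filter pattern in the log: a
    // name filter per component returns its container IDs, one per line.
    func containerIDs(component string) []string {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
        if err != nil {
            return nil
        }
        return strings.Fields(string(out))
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "kube-scheduler", "kube-controller-manager"} {
            for _, id := range containerIDs(c) {
                logs, _ := exec.Command("docker", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s]: %d bytes of logs\n", c, id, len(logs))
            }
        }
    }
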
	I1217 01:42:50.898774    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:50.918777    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:42:50.949776    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:42:50.952800    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:42:50.983840    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:42:50.988150    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:42:51.019180    5332 logs.go:282] 0 containers: []
	W1217 01:42:51.019180    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:42:51.022177    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:42:51.057185    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:42:51.060185    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:42:51.094207    5332 logs.go:282] 0 containers: []
	W1217 01:42:51.094207    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:42:51.097198    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:42:51.126208    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:42:51.129214    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:42:51.160204    5332 logs.go:282] 0 containers: []
	W1217 01:42:51.161206    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:42:51.164204    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:42:51.194503    5332 logs.go:282] 0 containers: []
	W1217 01:42:51.194503    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:42:51.194503    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:42:51.194503    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:42:51.263667    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:42:51.263667    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:42:51.304854    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:42:51.304854    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:42:51.403606    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:42:51.403606    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:42:51.403606    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:42:51.441588    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:42:51.441588    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:42:51.506369    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:42:51.506369    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:42:51.565361    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:42:51.565361    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:42:51.606356    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:42:51.606356    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:42:51.648371    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:42:51.648371    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:42:54.205604    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:54.230111    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:42:54.265709    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:42:54.268949    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:42:54.300830    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:42:54.304710    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:42:54.335718    5332 logs.go:282] 0 containers: []
	W1217 01:42:54.335718    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:42:54.339658    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:42:54.375065    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:42:54.381120    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:42:54.414766    5332 logs.go:282] 0 containers: []
	W1217 01:42:54.414766    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:42:54.418400    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:42:54.459824    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:42:54.462825    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:42:54.501221    5332 logs.go:282] 0 containers: []
	W1217 01:42:54.501221    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:42:54.504214    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:42:54.556847    5332 logs.go:282] 0 containers: []
	W1217 01:42:54.556847    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:42:54.556847    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:42:54.556847    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:42:54.642408    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:42:54.642443    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:42:54.642443    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:42:54.690900    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:42:54.691901    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:42:54.737880    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:42:54.737880    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:42:54.775271    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:42:54.775271    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:42:54.824780    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:42:54.825345    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:42:54.864292    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:42:54.864292    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:42:54.907889    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:42:54.907889    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:42:55.032471    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:42:55.032471    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:42:57.661014    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:42:57.687409    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:42:57.724782    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:42:57.728785    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:42:57.764782    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:42:57.768783    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:42:57.804779    5332 logs.go:282] 0 containers: []
	W1217 01:42:57.804779    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:42:57.807774    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:42:57.837791    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:42:57.841784    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:42:57.873777    5332 logs.go:282] 0 containers: []
	W1217 01:42:57.874779    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:42:57.877780    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:42:57.918346    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:42:57.921347    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:42:57.956172    5332 logs.go:282] 0 containers: []
	W1217 01:42:57.956249    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:42:57.962271    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:42:57.993627    5332 logs.go:282] 0 containers: []
	W1217 01:42:57.993671    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:42:57.993728    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:42:57.993728    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:42:58.079503    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:42:58.079503    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:42:58.136898    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:42:58.136898    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:42:58.187911    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:42:58.187911    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:42:58.233900    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:42:58.233900    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:42:58.267947    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:42:58.267947    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:42:58.322889    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:42:58.322889    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:42:58.359872    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:42:58.359872    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:42:58.462131    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:42:58.462131    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:42:58.462131    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:43:01.011948    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:43:01.037402    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:43:01.070218    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:43:01.074153    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:43:01.114536    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:43:01.118540    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:43:01.150524    5332 logs.go:282] 0 containers: []
	W1217 01:43:01.150524    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:43:01.154521    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:43:01.183528    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:43:01.187526    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:43:01.231535    5332 logs.go:282] 0 containers: []
	W1217 01:43:01.231535    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:43:01.234531    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:43:01.263538    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:43:01.267535    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:43:01.296161    5332 logs.go:282] 0 containers: []
	W1217 01:43:01.296161    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:43:01.300155    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:43:01.331155    5332 logs.go:282] 0 containers: []
	W1217 01:43:01.331155    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:43:01.331155    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:43:01.331155    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:43:01.394155    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:43:01.394155    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:43:01.435379    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:43:01.435379    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:43:01.526699    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:43:01.526763    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:43:01.526794    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:43:01.574696    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:43:01.574744    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:43:01.621196    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:43:01.621196    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:43:01.653670    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:43:01.653706    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:43:01.710536    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:43:01.710586    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:43:01.762967    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:43:01.762967    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:43:04.307958    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:43:04.330081    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:43:04.359821    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:43:04.362952    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:43:04.399810    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:43:04.403428    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:43:04.433216    5332 logs.go:282] 0 containers: []
	W1217 01:43:04.433216    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:43:04.438905    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:43:04.471440    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:43:04.477666    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:43:04.507986    5332 logs.go:282] 0 containers: []
	W1217 01:43:04.507986    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:43:04.511939    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:43:04.543105    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:43:04.547095    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:43:04.579732    5332 logs.go:282] 0 containers: []
	W1217 01:43:04.579732    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:43:04.583771    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:43:04.615268    5332 logs.go:282] 0 containers: []
	W1217 01:43:04.615268    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:43:04.615268    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:43:04.615268    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:43:04.668839    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:43:04.669832    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:43:04.709805    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:43:04.709805    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:43:04.748001    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:43:04.748001    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:43:04.840622    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:43:04.841612    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:43:04.841612    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:43:04.894563    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:43:04.894563    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:43:04.935987    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:43:04.935987    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:43:04.992020    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:43:04.992020    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:43:05.061018    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:43:05.061018    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:43:07.615445    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:43:07.671445    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:43:07.704201    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:43:07.710467    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:43:07.742485    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:43:07.746482    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:43:07.782477    5332 logs.go:282] 0 containers: []
	W1217 01:43:07.782477    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:43:07.785465    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:43:07.821919    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:43:07.825339    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:43:07.860714    5332 logs.go:282] 0 containers: []
	W1217 01:43:07.860714    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:43:07.863707    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:43:07.901551    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:43:07.905486    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:43:07.936208    5332 logs.go:282] 0 containers: []
	W1217 01:43:07.936208    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:43:07.940220    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:43:07.973204    5332 logs.go:282] 0 containers: []
	W1217 01:43:07.973204    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:43:07.973204    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:43:07.973204    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:43:08.038214    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:43:08.038214    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:43:08.087999    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:43:08.087999    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:43:08.129674    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:43:08.129674    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:43:08.172996    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:43:08.172996    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:43:08.228383    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:43:08.228383    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:43:08.265334    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:43:08.265334    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:43:08.362593    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:43:08.362593    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:43:08.362593    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:43:08.397177    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:43:08.397177    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:43:10.934606    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:43:10.960562    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:43:10.992462    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:43:10.995950    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:43:11.030559    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:43:11.034203    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:43:11.063700    5332 logs.go:282] 0 containers: []
	W1217 01:43:11.063700    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:43:11.066970    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:43:11.101759    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:43:11.105276    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:43:11.134407    5332 logs.go:282] 0 containers: []
	W1217 01:43:11.134407    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:43:11.138216    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:43:11.170656    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:43:11.173925    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:43:11.204165    5332 logs.go:282] 0 containers: []
	W1217 01:43:11.204165    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:43:11.207916    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:43:11.240789    5332 logs.go:282] 0 containers: []
	W1217 01:43:11.240789    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:43:11.240789    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:43:11.240789    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:43:11.288547    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:43:11.288572    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:43:11.333067    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:43:11.333067    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:43:11.373996    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:43:11.373996    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:43:11.463607    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:43:11.463607    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:43:11.463607    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:43:11.510988    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:43:11.510988    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:43:11.541993    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:43:11.541993    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:43:11.593866    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:43:11.593866    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:43:11.654994    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:43:11.654994    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	[... the log-gathering cycle above repeats with identical output every ~3 seconds from 01:43:14 through 01:43:51: the same containers are found each time (kube-apiserver d872d44b86a8, etcd 2684c1bc7d48, kube-scheduler ce404ec360fa, kube-controller-manager 078cba2c262b), no coredns, kube-proxy, kindnet, or storage-provisioner containers appear, and every "describe nodes" attempt fails with "The connection to the server localhost:8443 was refused" ...]
	I1217 01:43:54.158131    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:43:54.178133    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:43:54.214215    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:43:54.217213    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:43:54.248220    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:43:54.252221    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:43:54.281898    5332 logs.go:282] 0 containers: []
	W1217 01:43:54.281898    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:43:54.285832    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:43:54.324340    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:43:54.328260    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:43:54.358445    5332 logs.go:282] 0 containers: []
	W1217 01:43:54.358445    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:43:54.361449    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:43:54.396579    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:43:54.400683    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:43:54.442378    5332 logs.go:282] 0 containers: []
	W1217 01:43:54.442378    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:43:54.446361    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:43:54.477967    5332 logs.go:282] 0 containers: []
	W1217 01:43:54.477967    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:43:54.477967    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:43:54.477967    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:43:54.545653    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:43:54.546640    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:43:54.591732    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:43:54.592713    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:43:54.682785    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:43:54.682785    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:43:54.682785    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:43:54.745980    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:43:54.745980    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:43:54.792172    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:43:54.792172    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:43:54.931739    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:43:54.931739    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:43:54.990230    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:43:54.990304    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:43:55.041398    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:43:55.041398    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:43:57.586977    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:43:57.615195    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:43:57.658125    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:43:57.661808    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:43:57.721833    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:43:57.728562    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:43:57.774934    5332 logs.go:282] 0 containers: []
	W1217 01:43:57.774934    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:43:57.779178    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:43:57.829093    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:43:57.834807    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:43:57.879201    5332 logs.go:282] 0 containers: []
	W1217 01:43:57.879288    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:43:57.884519    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:43:57.917833    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:43:57.923056    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:43:57.967229    5332 logs.go:282] 0 containers: []
	W1217 01:43:57.967229    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:43:57.972204    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:43:58.003614    5332 logs.go:282] 0 containers: []
	W1217 01:43:58.003614    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:43:58.003614    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:43:58.003614    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:43:58.055529    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:43:58.055577    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:43:58.105380    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:43:58.105380    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:43:58.146904    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:43:58.146904    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:43:58.217381    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:43:58.217424    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:43:58.309438    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:43:58.309438    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:43:58.349443    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:43:58.349443    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:43:58.406991    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:43:58.407981    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:43:58.454195    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:43:58.454195    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:43:58.565873    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:01.071255    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:01.093168    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:01.126602    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:01.133959    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:01.250140    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:01.254633    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:01.352418    5332 logs.go:282] 0 containers: []
	W1217 01:44:01.352418    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:01.356211    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:01.392210    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:01.396017    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:01.425693    5332 logs.go:282] 0 containers: []
	W1217 01:44:01.425693    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:01.432809    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:01.470521    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:01.474514    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:01.504636    5332 logs.go:282] 0 containers: []
	W1217 01:44:01.504636    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:01.510094    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:01.537565    5332 logs.go:282] 0 containers: []
	W1217 01:44:01.537565    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:01.537565    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:01.537565    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:44:01.593892    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:01.593892    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:01.680816    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:01.680816    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:01.680816    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:01.722932    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:01.722932    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:01.757772    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:01.757834    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:01.821233    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:01.822229    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:44:01.862117    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:01.862219    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:01.917980    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:01.917980    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:44:01.975160    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:01.975160    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:04.517801    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:04.541294    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:04.572363    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:04.575927    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:04.608993    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:04.612871    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:04.641881    5332 logs.go:282] 0 containers: []
	W1217 01:44:04.641881    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:04.647687    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:04.677318    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:04.680613    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:04.712162    5332 logs.go:282] 0 containers: []
	W1217 01:44:04.712211    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:04.715918    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:04.750936    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:04.754212    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:04.787656    5332 logs.go:282] 0 containers: []
	W1217 01:44:04.787656    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:04.792634    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:04.825746    5332 logs.go:282] 0 containers: []
	W1217 01:44:04.825746    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:04.825746    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:04.825746    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:04.915445    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:04.915445    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:04.915445    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:04.972477    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:04.972477    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:05.015023    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:05.015023    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:44:05.057572    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:05.058118    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:05.104696    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:05.104772    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:05.167362    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:05.167362    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:05.204808    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:05.204808    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:44:05.264410    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:05.264478    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:44:07.813803    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:07.837043    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:07.876921    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:07.880728    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:07.920286    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:07.924024    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:07.966813    5332 logs.go:282] 0 containers: []
	W1217 01:44:07.967361    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:07.971303    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:08.005460    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:08.009140    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:08.040032    5332 logs.go:282] 0 containers: []
	W1217 01:44:08.040102    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:08.046013    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:08.088790    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:08.092998    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:08.123569    5332 logs.go:282] 0 containers: []
	W1217 01:44:08.123569    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:08.127511    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:08.161514    5332 logs.go:282] 0 containers: []
	W1217 01:44:08.161514    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:08.161514    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:08.161514    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:44:08.205367    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:08.205367    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:08.297151    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:08.297151    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:08.297215    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:08.345259    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:08.345259    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:08.381343    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:08.381343    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:08.449889    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:08.449889    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:08.497965    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:08.497965    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:44:08.543985    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:08.543985    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:08.586210    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:08.586210    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:44:11.145839    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:11.170186    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:11.204498    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:11.208259    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:11.238901    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:11.242586    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:11.272047    5332 logs.go:282] 0 containers: []
	W1217 01:44:11.272086    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:11.275794    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:11.310552    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:11.313656    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:11.343052    5332 logs.go:282] 0 containers: []
	W1217 01:44:11.343087    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:11.346617    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:11.379757    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:11.383740    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:11.411758    5332 logs.go:282] 0 containers: []
	W1217 01:44:11.411833    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:11.415705    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:11.443138    5332 logs.go:282] 0 containers: []
	W1217 01:44:11.443138    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:11.443138    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:11.443138    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:11.488763    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:11.488763    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:44:11.532053    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:11.532053    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:11.571532    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:11.572064    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:11.602788    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:11.602788    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:44:11.655223    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:11.655223    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:11.721160    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:11.721160    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:44:11.761379    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:11.761379    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:11.848183    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:11.848183    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:11.848183    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:14.397315    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:14.421233    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:14.452489    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:14.455719    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:14.488013    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:14.491977    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:14.519925    5332 logs.go:282] 0 containers: []
	W1217 01:44:14.519925    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:14.523660    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:14.553565    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:14.556391    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:14.585600    5332 logs.go:282] 0 containers: []
	W1217 01:44:14.585674    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:14.590771    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:14.619254    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:14.623135    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:14.651943    5332 logs.go:282] 0 containers: []
	W1217 01:44:14.651943    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:14.656073    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:14.685303    5332 logs.go:282] 0 containers: []
	W1217 01:44:14.685303    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:14.685303    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:14.685303    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:14.723144    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:14.723144    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:44:14.777449    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:14.777449    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:44:14.812589    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:14.812589    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:14.900370    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:14.900370    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:14.900370    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:14.939618    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:14.939618    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:14.973839    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:14.973839    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:15.039337    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:15.039337    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:15.089718    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:15.089779    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:44:17.637826    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:17.668371    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:17.705238    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:17.709145    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:17.740421    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:17.744270    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:17.773573    5332 logs.go:282] 0 containers: []
	W1217 01:44:17.773573    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:17.777486    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:17.807688    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:17.811023    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:17.840896    5332 logs.go:282] 0 containers: []
	W1217 01:44:17.840952    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:17.844278    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:17.876864    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:17.880454    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:17.909479    5332 logs.go:282] 0 containers: []
	W1217 01:44:17.909479    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:17.913395    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:17.945199    5332 logs.go:282] 0 containers: []
	W1217 01:44:17.945199    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:17.945199    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:17.945199    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:18.010852    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:18.010852    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:18.104541    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:18.104541    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:18.104541    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:18.152561    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:18.152561    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:18.196375    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:18.196935    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:44:18.245374    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:18.245374    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:44:18.294480    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:18.294480    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:44:18.331397    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:18.331397    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:18.372279    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:18.372279    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:20.907899    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:20.934386    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:20.974535    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:20.978640    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:21.027937    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:21.033054    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:21.063362    5332 logs.go:282] 0 containers: []
	W1217 01:44:21.063362    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:21.067670    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:21.098613    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:21.102593    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:21.132713    5332 logs.go:282] 0 containers: []
	W1217 01:44:21.132773    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:21.137333    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:21.169285    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:21.172990    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:21.205815    5332 logs.go:282] 0 containers: []
	W1217 01:44:21.205869    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:21.209512    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:21.238784    5332 logs.go:282] 0 containers: []
	W1217 01:44:21.238784    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:21.238784    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:21.238784    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:21.305937    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:21.305937    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:21.401953    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:21.401953    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:21.401953    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:21.449078    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:21.449625    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:44:21.501580    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:21.501634    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:44:21.537909    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:21.537909    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:21.594708    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:21.594708    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:44:21.641056    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:21.641056    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:21.677487    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:21.677554    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:24.216258    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:24.240818    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:24.279719    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:24.283585    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:24.313084    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:24.316074    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:24.358812    5332 logs.go:282] 0 containers: []
	W1217 01:44:24.358812    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:24.362807    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:24.400933    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:24.404164    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:24.437780    5332 logs.go:282] 0 containers: []
	W1217 01:44:24.437780    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:24.443229    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:24.477947    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:24.481671    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:24.517018    5332 logs.go:282] 0 containers: []
	W1217 01:44:24.517018    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:24.520032    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:24.551012    5332 logs.go:282] 0 containers: []
	W1217 01:44:24.551012    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:24.551012    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:24.551012    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:44:24.593014    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:24.593014    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:24.680040    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:24.680040    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:24.680040    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:24.752436    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:24.752436    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:44:24.793436    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:24.793436    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:24.837445    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:24.837475    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:24.882453    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:24.882453    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:24.914709    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:24.914709    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:44:24.979444    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:24.979444    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:27.560304    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:27.584149    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:27.618843    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:27.622794    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:27.657163    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:27.661591    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:27.698138    5332 logs.go:282] 0 containers: []
	W1217 01:44:27.698138    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:27.700846    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:27.746598    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:27.752693    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:27.789241    5332 logs.go:282] 0 containers: []
	W1217 01:44:27.789293    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:27.793513    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:27.830082    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:27.833770    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:27.865253    5332 logs.go:282] 0 containers: []
	W1217 01:44:27.865253    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:27.870268    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:27.906084    5332 logs.go:282] 0 containers: []
	W1217 01:44:27.906084    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:27.906084    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:27.906084    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:44:27.952658    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:27.952658    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:28.041683    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:28.042216    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:28.042216    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:44:28.089042    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:28.089042    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:28.135717    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:28.135791    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:28.178145    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:28.178145    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:28.215597    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:28.215597    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:28.246677    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:28.246677    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:44:28.296850    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:28.296850    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:30.866346    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:30.892252    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:30.927962    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:30.931315    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:30.962634    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:30.965653    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:30.998426    5332 logs.go:282] 0 containers: []
	W1217 01:44:30.998426    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:31.001426    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:31.033004    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:31.036017    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:31.091058    5332 logs.go:282] 0 containers: []
	W1217 01:44:31.091058    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:31.095266    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:31.130747    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:31.133821    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:31.165583    5332 logs.go:282] 0 containers: []
	W1217 01:44:31.165583    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:31.169731    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:31.197920    5332 logs.go:282] 0 containers: []
	W1217 01:44:31.198912    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:31.198912    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:31.198912    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:31.261719    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:31.261719    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:44:31.300568    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:31.300568    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:31.399761    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:31.399761    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:31.399761    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:31.446149    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:31.446149    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:31.483422    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:31.483422    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:44:31.546251    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:31.546251    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:31.588053    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:31.588053    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:44:31.637391    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:31.637391    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:34.184289    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:34.207928    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:34.240135    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:34.244042    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:34.275204    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:34.279142    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:34.308468    5332 logs.go:282] 0 containers: []
	W1217 01:44:34.308468    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:34.311925    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:34.340629    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:34.344102    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:34.372391    5332 logs.go:282] 0 containers: []
	W1217 01:44:34.372391    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:34.377333    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:34.410432    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:34.414358    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:34.442607    5332 logs.go:282] 0 containers: []
	W1217 01:44:34.442607    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:34.445600    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:34.473698    5332 logs.go:282] 0 containers: []
	W1217 01:44:34.473698    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:34.473698    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:34.473698    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:34.539585    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:34.539585    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:34.628192    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:34.628192    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:34.628192    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:34.677855    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:34.677855    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:34.721222    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:34.721282    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:44:34.774642    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:34.774697    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:44:34.812644    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:34.812644    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:44:34.858135    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:34.858135    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:34.898972    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:34.898972    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:37.436311    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:37.459922    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:37.496333    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:37.500165    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:37.532335    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:37.535813    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:37.564822    5332 logs.go:282] 0 containers: []
	W1217 01:44:37.564822    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:37.569152    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:37.602958    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:37.606232    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:37.635164    5332 logs.go:282] 0 containers: []
	W1217 01:44:37.635164    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:37.639659    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:37.668092    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:37.671231    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:37.698198    5332 logs.go:282] 0 containers: []
	W1217 01:44:37.698198    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:37.702406    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:37.731677    5332 logs.go:282] 0 containers: []
	W1217 01:44:37.731677    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:37.731677    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:37.731677    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:37.765428    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:37.765428    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:37.850465    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:37.850465    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:37.850465    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:37.906175    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:37.906175    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:37.950563    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:37.950596    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:37.987744    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:37.987824    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:44:38.035670    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:38.035670    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:38.096556    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:38.096556    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:44:38.135010    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:38.135010    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
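
[Editor's note: the cycle above repeats roughly every three seconds while minikube waits for kube-apiserver to come up. Below is a minimal Go sketch of such a poll-until-healthy loop; the interval, timeout, and the standalone main are illustrative assumptions, not minikube's actual implementation, though the pgrep check is copied from the log.]

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// apiserverRunning mirrors the "sudo pgrep -xnf kube-apiserver.*minikube.*"
// check from the log above; a non-zero exit means no matching process yet.
func apiserverRunning(ctx context.Context) bool {
	cmd := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
	return cmd.Run() == nil
}

func main() {
	// Illustrative values: the timestamps above show ~3s between cycles.
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	ticker := time.NewTicker(3 * time.Second)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			fmt.Println("timed out waiting for kube-apiserver")
			return
		case <-ticker.C:
			if apiserverRunning(ctx) {
				fmt.Println("kube-apiserver process found")
				return
			}
			// On each miss, minikube re-gathers component logs, which is
			// why the same block recurs throughout this log.
		}
	}
}
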
	I1217 01:44:40.692222    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:40.721740    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:40.753839    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:40.757352    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:40.787478    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:40.790824    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:40.821767    5332 logs.go:282] 0 containers: []
	W1217 01:44:40.821818    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:40.825579    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:40.858069    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:40.861388    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:40.892858    5332 logs.go:282] 0 containers: []
	W1217 01:44:40.892905    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:40.896447    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:40.927945    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:40.931223    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:40.964140    5332 logs.go:282] 0 containers: []
	W1217 01:44:40.964140    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:40.967904    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:41.000864    5332 logs.go:282] 0 containers: []
	W1217 01:44:41.000864    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:41.000864    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:41.000864    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:41.068241    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:41.068241    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:44:41.108983    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:41.108983    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:41.156388    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:41.156388    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:44:41.203720    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:41.204693    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:41.294242    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:41.294242    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:41.294242    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:41.338078    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:41.338078    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:41.378447    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:41.378486    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:41.411801    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:41.411801    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
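
[Editor's note: each cycle enumerates control-plane containers with a docker ps name filter; the k8s_ prefix is the container-naming convention the Kubernetes Docker integration (cri-dockerd) applies to pod containers. A hedged Go sketch of the same lookup, run locally rather than over SSH; the helper name and loop are my own:]

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs runs the same lookup as the log above:
// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("%s: lookup failed: %v\n", c, err)
			continue
		}
		// Matches the "N containers: [...]" lines in the log; here only
		// apiserver, etcd, scheduler, and controller-manager report one ID.
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}
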
	I1217 01:44:43.966534    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:43.989953    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:44.027872    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:44.032609    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:44.064934    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:44.067934    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:44.097727    5332 logs.go:282] 0 containers: []
	W1217 01:44:44.097727    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:44.101511    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:44.135694    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:44.140698    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:44.173510    5332 logs.go:282] 0 containers: []
	W1217 01:44:44.173510    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:44.178924    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:44.235496    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:44.239508    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:44.269606    5332 logs.go:282] 0 containers: []
	W1217 01:44:44.269606    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:44.272599    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:44.303724    5332 logs.go:282] 0 containers: []
	W1217 01:44:44.303724    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:44.303724    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:44.303724    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:44.374304    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:44.374304    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:44.535612    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:44.535612    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:44.535612    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:44.575598    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:44.575598    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:44.610895    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:44.610895    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:44:44.651394    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:44.651394    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:44.703410    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:44.703410    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:44:44.746568    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:44.746568    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:44.782465    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:44.782465    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
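
[Editor's note: the recurring "describe nodes" failure is a plain TCP connection refused on localhost:8443, i.e. nothing is listening where the kubeconfig points even though an apiserver container exists. A minimal probe, as a hedged sketch; the /healthz endpoint is the standard kube-apiserver health path, and skipping certificate verification is an assumption made purely for illustration:]

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	// The apiserver serves a self-signed cert, skipped here for brevity.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/healthz")
	if err != nil {
		// With no listener this reports "connection refused", matching
		// the kubectl error repeated throughout this log.
		fmt.Println("probe failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("apiserver responded:", resp.Status)
}
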
	I1217 01:44:47.352027    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:47.371053    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:47.403055    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:47.406058    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:47.443857    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:47.449476    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:47.479683    5332 logs.go:282] 0 containers: []
	W1217 01:44:47.479683    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:47.483677    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:47.516303    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:47.522997    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:47.563825    5332 logs.go:282] 0 containers: []
	W1217 01:44:47.563825    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:47.568470    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:47.600521    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:47.604320    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:47.640463    5332 logs.go:282] 0 containers: []
	W1217 01:44:47.640463    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:47.644471    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:47.677965    5332 logs.go:282] 0 containers: []
	W1217 01:44:47.677965    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:47.677965    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:47.677965    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:47.725209    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:47.725209    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:47.773960    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:47.773960    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:44:47.824444    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:47.824444    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:47.870399    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:47.870920    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:44:47.929410    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:47.929410    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:48.012509    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:48.012509    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:48.012509    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:48.056010    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:48.056010    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:48.122973    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:48.122973    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
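
[Editor's note: between API checks, each cycle shells out for the component logs themselves: journalctl for the kubelet and Docker units, docker logs --tail 400 per container, dmesg for kernel warnings, and crictl ps -a with a docker ps -a fallback for container status. A small Go sketch that runs the same commands locally and tags their output; the command strings are copied from the log, while the runner itself is illustrative:]

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same gathering commands the log shows, minus the SSH hop.
	// Map iteration order is nondeterministic, as is the gather order above.
	sources := map[string]string{
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
	}
	for name, cmd := range sources {
		fmt.Printf("==> Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("%s failed: %v\n", name, err)
		}
		fmt.Print(string(out))
	}
}
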
	I1217 01:44:50.671850    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:50.695830    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:50.734519    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:50.739930    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:50.776096    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:50.779994    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:50.807477    5332 logs.go:282] 0 containers: []
	W1217 01:44:50.807477    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:50.810478    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:50.842083    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:50.845084    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:50.877500    5332 logs.go:282] 0 containers: []
	W1217 01:44:50.877500    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:50.881487    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:50.912305    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:50.916779    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:50.955082    5332 logs.go:282] 0 containers: []
	W1217 01:44:50.955082    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:50.960006    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:50.992718    5332 logs.go:282] 0 containers: []
	W1217 01:44:50.992718    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:50.992718    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:50.992718    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:51.061073    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:51.061073    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:44:51.102251    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:51.102295    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:51.160739    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:51.160739    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:51.199423    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:51.199423    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:51.285462    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:51.285462    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:51.285462    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:51.329462    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:51.329462    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:44:51.375428    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:51.375428    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:51.414490    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:51.414550    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:44:53.969209    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:53.990208    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:54.021178    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:54.028896    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:54.063405    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:54.067353    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:54.096698    5332 logs.go:282] 0 containers: []
	W1217 01:44:54.096698    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:54.100239    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:54.130954    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:54.135645    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:54.170855    5332 logs.go:282] 0 containers: []
	W1217 01:44:54.170855    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:54.174091    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:54.208534    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:54.211354    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:54.245471    5332 logs.go:282] 0 containers: []
	W1217 01:44:54.245471    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:54.249075    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:54.276943    5332 logs.go:282] 0 containers: []
	W1217 01:44:54.276943    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:54.276943    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:54.276943    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:54.359368    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:54.359368    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:54.359893    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:54.392073    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:54.392594    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:44:54.436785    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:54.436785    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:54.488788    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:54.488788    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:54.529900    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:54.529900    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:44:54.578128    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:54.578128    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:54.616089    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:54.616160    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:44:54.679279    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:54.679279    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:57.260722    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:44:57.283296    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:44:57.319681    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:44:57.325541    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:44:57.359287    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:44:57.362781    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:44:57.395669    5332 logs.go:282] 0 containers: []
	W1217 01:44:57.395669    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:44:57.399501    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:44:57.432326    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:44:57.436360    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:44:57.466732    5332 logs.go:282] 0 containers: []
	W1217 01:44:57.466732    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:44:57.471088    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:44:57.501313    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:44:57.504717    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:44:57.548283    5332 logs.go:282] 0 containers: []
	W1217 01:44:57.548800    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:44:57.553515    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:44:57.586008    5332 logs.go:282] 0 containers: []
	W1217 01:44:57.586008    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:44:57.586008    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:44:57.586008    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:44:57.659371    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:44:57.659371    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:44:57.771338    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:44:57.771338    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:44:57.771338    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:44:57.817363    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:44:57.817363    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:44:57.865268    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:44:57.865268    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:44:57.905827    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:44:57.905864    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:44:57.942029    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:44:57.942029    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:44:57.982074    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:44:57.982074    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:44:58.025464    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:44:58.025464    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:45:00.594447    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:45:00.618243    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:45:00.652904    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:45:00.656898    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:45:00.686494    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:45:00.692158    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:45:00.732874    5332 logs.go:282] 0 containers: []
	W1217 01:45:00.732874    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:45:00.736944    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:45:00.771041    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:45:00.776470    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:45:00.810493    5332 logs.go:282] 0 containers: []
	W1217 01:45:00.810535    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:45:00.814036    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:45:00.846268    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:45:00.851615    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:45:00.879784    5332 logs.go:282] 0 containers: []
	W1217 01:45:00.879784    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:45:00.886171    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:45:00.920522    5332 logs.go:282] 0 containers: []
	W1217 01:45:00.920522    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:45:00.920522    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:45:00.920522    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:45:00.965002    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:45:00.965002    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:45:00.999486    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:45:00.999486    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:45:01.048490    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:45:01.049013    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:45:01.136955    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:45:01.136955    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:45:01.136955    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:45:01.184231    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:45:01.184231    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:45:01.224167    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:45:01.224167    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:45:01.259463    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:45:01.259543    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:45:01.325219    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:45:01.325219    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:45:03.871408    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:45:03.897723    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:45:03.932239    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:45:03.936265    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:45:03.969142    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:45:03.973709    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:45:04.004752    5332 logs.go:282] 0 containers: []
	W1217 01:45:04.004752    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:45:04.008818    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:45:04.047932    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:45:04.051621    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:45:04.083502    5332 logs.go:282] 0 containers: []
	W1217 01:45:04.083502    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:45:04.087472    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:45:04.120001    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:45:04.124800    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:45:04.154253    5332 logs.go:282] 0 containers: []
	W1217 01:45:04.154253    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:45:04.157983    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:45:04.184789    5332 logs.go:282] 0 containers: []
	W1217 01:45:04.184789    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:45:04.184789    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:45:04.184789    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:45:04.234256    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:45:04.234256    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:45:04.278530    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:45:04.278530    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:45:04.315102    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:45:04.315102    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:45:04.357949    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:45:04.357949    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:45:04.405952    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:45:04.406035    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:45:04.474973    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:45:04.475014    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:45:04.556526    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:45:04.556526    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:45:04.649374    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:45:04.649374    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:45:04.649374    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:45:07.205290    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:45:07.227515    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:45:07.262664    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:45:07.266123    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:45:07.297445    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:45:07.301164    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:45:07.329908    5332 logs.go:282] 0 containers: []
	W1217 01:45:07.329966    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:45:07.333453    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:45:07.365448    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:45:07.368898    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:45:07.398441    5332 logs.go:282] 0 containers: []
	W1217 01:45:07.398441    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:45:07.402108    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:45:07.436658    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:45:07.440453    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:45:07.468156    5332 logs.go:282] 0 containers: []
	W1217 01:45:07.468156    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:45:07.472149    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:45:07.503472    5332 logs.go:282] 0 containers: []
	W1217 01:45:07.503472    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:45:07.503472    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:45:07.503472    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:45:07.540259    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:45:07.540259    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:45:07.618610    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:45:07.618610    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:45:07.618610    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:45:07.649131    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:45:07.649131    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:45:07.701351    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:45:07.701415    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:45:07.767815    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:45:07.767815    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:45:07.821206    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:45:07.821946    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:45:07.865756    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:45:07.865756    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:45:07.908623    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:45:07.908623    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:45:10.455427    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:45:10.482354    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:45:10.511348    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:45:10.514352    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:45:10.548602    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:45:10.552424    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:45:10.587309    5332 logs.go:282] 0 containers: []
	W1217 01:45:10.587309    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:45:10.593600    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:45:10.622694    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:45:10.629737    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:45:10.662134    5332 logs.go:282] 0 containers: []
	W1217 01:45:10.662134    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:45:10.666165    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:45:10.700997    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:45:10.704736    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:45:10.737185    5332 logs.go:282] 0 containers: []
	W1217 01:45:10.737211    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:45:10.741000    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:45:10.774613    5332 logs.go:282] 0 containers: []
	W1217 01:45:10.774613    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:45:10.774613    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:45:10.774613    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:45:10.815230    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:45:10.815230    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:45:10.864913    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:45:10.864913    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:45:10.905979    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:45:10.905979    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:45:10.952455    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:45:10.952455    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:45:10.988313    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:45:10.988384    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:45:11.027654    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:45:11.027654    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:45:11.084221    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:45:11.084221    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:45:11.153332    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:45:11.154304    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:45:11.257350    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:45:13.762791    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:45:13.788583    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:45:13.820804    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:45:13.824885    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:45:13.854415    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:45:13.860769    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:45:13.893339    5332 logs.go:282] 0 containers: []
	W1217 01:45:13.893339    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:45:13.899200    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:45:13.933500    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:45:13.936511    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:45:13.967517    5332 logs.go:282] 0 containers: []
	W1217 01:45:13.967517    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:45:13.971529    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:45:14.006087    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:45:14.009091    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:45:14.049471    5332 logs.go:282] 0 containers: []
	W1217 01:45:14.049471    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:45:14.052464    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:45:14.089572    5332 logs.go:282] 0 containers: []
	W1217 01:45:14.089572    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:45:14.089572    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:45:14.089572    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:45:14.135807    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:45:14.135807    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:45:14.167802    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:45:14.167802    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:45:14.238814    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:45:14.238814    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:45:14.278173    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:45:14.278173    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:45:14.383298    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:45:14.383298    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:45:14.383298    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:45:14.436315    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:45:14.437299    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:45:14.489218    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:45:14.489218    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:45:14.530912    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:45:14.530912    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:45:17.087114    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:45:17.316564    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:45:17.364246    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:45:17.367580    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:45:17.410725    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:45:17.414823    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:45:17.457802    5332 logs.go:282] 0 containers: []
	W1217 01:45:17.457842    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:45:17.462529    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:45:17.503526    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:45:17.508372    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:45:17.537713    5332 logs.go:282] 0 containers: []
	W1217 01:45:17.537713    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:45:17.541635    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:45:17.570993    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:45:17.573995    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:45:17.608003    5332 logs.go:282] 0 containers: []
	W1217 01:45:17.608003    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:45:17.612210    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:45:17.647478    5332 logs.go:282] 0 containers: []
	W1217 01:45:17.647543    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:45:17.647575    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:45:17.647575    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:45:17.700057    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:45:17.700057    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:45:17.752619    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:45:17.752619    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:45:17.860930    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:45:17.860930    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:45:17.860930    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:45:17.912890    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:45:17.912890    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:45:17.952893    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:45:17.952893    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:45:17.987894    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:45:17.987894    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:45:18.052720    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:45:18.052720    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:45:18.127664    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:45:18.128673    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:45:20.672062    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:45:20.728347    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:45:20.763351    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:45:20.767344    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:45:20.800338    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:45:20.804346    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:45:20.846706    5332 logs.go:282] 0 containers: []
	W1217 01:45:20.846788    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:45:20.852277    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:45:20.890936    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:45:20.893931    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:45:20.932919    5332 logs.go:282] 0 containers: []
	W1217 01:45:20.932919    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:45:20.937803    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:45:20.969863    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:45:20.973861    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:45:21.023881    5332 logs.go:282] 0 containers: []
	W1217 01:45:21.024067    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:45:21.035035    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:45:21.079059    5332 logs.go:282] 0 containers: []
	W1217 01:45:21.079059    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:45:21.079059    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:45:21.079059    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:45:21.122161    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:45:21.122161    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:45:21.245006    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:45:21.245006    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:45:21.245006    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:45:21.301053    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:45:21.301053    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:45:21.343799    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:45:21.343799    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:45:21.413797    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:45:21.413797    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:45:21.488798    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:45:21.488798    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:45:21.543046    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:45:21.543046    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:45:21.590044    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:45:21.590044    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:45:24.143848    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:45:24.167619    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:45:24.201037    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:45:24.204576    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:45:24.242440    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:45:24.246579    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:45:24.275927    5332 logs.go:282] 0 containers: []
	W1217 01:45:24.276017    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:45:24.279444    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:45:24.307717    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:45:24.311123    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:45:24.343357    5332 logs.go:282] 0 containers: []
	W1217 01:45:24.343357    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:45:24.347115    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:45:24.381865    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:45:24.385824    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:45:24.418475    5332 logs.go:282] 0 containers: []
	W1217 01:45:24.418475    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:45:24.422468    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:45:24.451645    5332 logs.go:282] 0 containers: []
	W1217 01:45:24.451645    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:45:24.451645    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:45:24.451645    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:45:24.523614    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:45:24.523614    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:45:24.563982    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:45:24.563982    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:45:24.656826    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:45:24.656826    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:45:24.656826    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:45:24.699226    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:45:24.699797    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:45:24.745946    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:45:24.745946    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:45:24.794503    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:45:24.794503    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:45:24.834626    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:45:24.834626    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:45:24.870987    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:45:24.870987    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:45:27.425274    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:45:27.449907    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:45:27.482218    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:45:27.487755    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:45:27.521512    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:45:27.525723    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:45:27.554767    5332 logs.go:282] 0 containers: []
	W1217 01:45:27.554767    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:45:27.557777    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:45:27.591261    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:45:27.594820    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:45:27.636903    5332 logs.go:282] 0 containers: []
	W1217 01:45:27.636903    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:45:27.640510    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:45:27.673547    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:45:27.677729    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:45:27.720966    5332 logs.go:282] 0 containers: []
	W1217 01:45:27.720966    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:45:27.726322    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:45:27.754666    5332 logs.go:282] 0 containers: []
	W1217 01:45:27.754666    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:45:27.754666    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:45:27.754666    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:45:27.792835    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:45:27.792835    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:45:27.874865    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:45:27.874900    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:45:27.874900    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:45:27.920941    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:45:27.920941    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:45:27.968230    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:45:27.968230    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:45:28.015618    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:45:28.015618    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:45:28.060269    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:45:28.060269    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:45:28.091274    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:45:28.091274    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:45:28.151881    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:45:28.151881    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:45:30.724389    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:45:30.745726    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:45:30.782703    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:45:30.787913    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:45:30.816659    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:45:30.820954    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:45:30.849054    5332 logs.go:282] 0 containers: []
	W1217 01:45:30.849054    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:45:30.853141    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:45:30.883985    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:45:30.887657    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:45:30.916161    5332 logs.go:282] 0 containers: []
	W1217 01:45:30.916161    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:45:30.920160    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:45:30.951203    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:45:30.955227    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:45:30.987426    5332 logs.go:282] 0 containers: []
	W1217 01:45:30.987426    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:45:30.991967    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:45:31.025430    5332 logs.go:282] 0 containers: []
	W1217 01:45:31.025430    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:45:31.025430    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:45:31.025430    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:45:31.064509    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:45:31.064509    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:45:31.101844    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:45:31.101844    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:45:31.132696    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:45:31.132755    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:45:31.196605    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:45:31.196605    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:45:31.284902    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:45:31.284998    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:45:31.284998    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:45:31.333966    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:45:31.333966    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:45:31.374812    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:45:31.374812    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:45:31.419143    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:45:31.419143    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:45:33.976875    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:45:34.005614    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:45:34.040454    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:45:34.044008    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:45:34.079804    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:45:34.082667    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:45:34.112104    5332 logs.go:282] 0 containers: []
	W1217 01:45:34.112174    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:45:34.115404    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:45:34.146685    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:45:34.149822    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:45:34.179763    5332 logs.go:282] 0 containers: []
	W1217 01:45:34.179855    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:45:34.183745    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:45:34.215962    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:45:34.219195    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:45:34.249934    5332 logs.go:282] 0 containers: []
	W1217 01:45:34.249934    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:45:34.254251    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:45:34.287530    5332 logs.go:282] 0 containers: []
	W1217 01:45:34.287530    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:45:34.287613    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:45:34.287613    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:45:34.326926    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:45:34.326926    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:45:34.376887    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:45:34.376963    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:45:34.413149    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:45:34.413149    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:45:34.515238    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:45:34.515238    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:45:34.515238    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:45:34.570206    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:45:34.570206    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:45:34.617726    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:45:34.617726    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:45:34.650808    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:45:34.650808    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:45:34.719248    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:45:34.719248    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:45:37.270603    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:45:37.294647    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:45:37.327001    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:45:37.330753    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:45:37.365727    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:45:37.369227    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:45:37.400008    5332 logs.go:282] 0 containers: []
	W1217 01:45:37.400043    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:45:37.404332    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:45:37.438858    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:45:37.442912    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:45:37.472050    5332 logs.go:282] 0 containers: []
	W1217 01:45:37.472111    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:45:37.476013    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:45:37.506659    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:45:37.510092    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:45:37.537640    5332 logs.go:282] 0 containers: []
	W1217 01:45:37.537640    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:45:37.540974    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:45:37.571397    5332 logs.go:282] 0 containers: []
	W1217 01:45:37.571397    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:45:37.571397    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:45:37.571397    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:45:37.620955    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:45:37.621006    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:45:37.661466    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:45:37.661532    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:45:37.708785    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:45:37.708785    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:45:37.744884    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:45:37.744884    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:45:37.774964    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:45:37.774964    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:45:37.842549    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:45:37.842549    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:45:37.882463    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:45:37.882463    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:45:37.966547    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:45:37.966604    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:45:37.966604    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:45:40.533871    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:45:40.558646    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:45:40.592643    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:45:40.595659    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:45:40.627797    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:45:40.631829    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:45:40.666019    5332 logs.go:282] 0 containers: []
	W1217 01:45:40.666019    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:45:40.669216    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:45:40.703888    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:45:40.710343    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:45:40.747888    5332 logs.go:282] 0 containers: []
	W1217 01:45:40.747888    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:45:40.752610    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:45:40.784122    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:45:40.789270    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:45:40.820647    5332 logs.go:282] 0 containers: []
	W1217 01:45:40.820647    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:45:40.824506    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:45:40.853395    5332 logs.go:282] 0 containers: []
	W1217 01:45:40.853395    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:45:40.853395    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:45:40.853395    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:45:40.897523    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:45:40.897523    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:45:40.989737    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:45:40.989737    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:45:40.989737    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:45:41.035295    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:45:41.035295    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:45:41.080033    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:45:41.080033    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:45:41.117028    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:45:41.117028    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:45:41.171864    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:45:41.172877    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:45:41.252098    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:45:41.252098    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:45:41.301101    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:45:41.301101    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:45:43.880519    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:45:43.904402    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:45:43.937705    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:45:43.941768    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:45:43.975179    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:45:43.978704    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:45:44.009757    5332 logs.go:282] 0 containers: []
	W1217 01:45:44.009757    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:45:44.013510    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:45:44.048232    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:45:44.052222    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:45:44.086470    5332 logs.go:282] 0 containers: []
	W1217 01:45:44.086470    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:45:44.090387    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:45:44.131039    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:45:44.135483    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:45:44.179219    5332 logs.go:282] 0 containers: []
	W1217 01:45:44.179219    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:45:44.183813    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:45:44.223450    5332 logs.go:282] 0 containers: []
	W1217 01:45:44.223450    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:45:44.223450    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:45:44.223450    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:45:44.270447    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:45:44.270447    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:45:44.309447    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:45:44.309447    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:45:44.369042    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:45:44.369042    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:45:44.417653    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:45:44.417705    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:45:44.474767    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:45:44.474864    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:45:44.539326    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:45:44.539326    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:45:44.574763    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:45:44.574763    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:45:44.619021    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:45:44.620203    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:45:44.713777    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:45:47.218553    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:45:47.243108    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 01:45:47.275006    5332 logs.go:282] 1 containers: [d872d44b86a8]
	I1217 01:45:47.278110    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 01:45:47.306377    5332 logs.go:282] 1 containers: [2684c1bc7d48]
	I1217 01:45:47.309703    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 01:45:47.340045    5332 logs.go:282] 0 containers: []
	W1217 01:45:47.340045    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:45:47.345022    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 01:45:47.383506    5332 logs.go:282] 1 containers: [ce404ec360fa]
	I1217 01:45:47.386908    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 01:45:47.418870    5332 logs.go:282] 0 containers: []
	W1217 01:45:47.418870    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:45:47.422071    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 01:45:47.460511    5332 logs.go:282] 1 containers: [078cba2c262b]
	I1217 01:45:47.463665    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 01:45:47.497090    5332 logs.go:282] 0 containers: []
	W1217 01:45:47.497143    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:45:47.501014    5332 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}
	I1217 01:45:47.541289    5332 logs.go:282] 0 containers: []
	W1217 01:45:47.541289    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:45:47.541289    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:45:47.541289    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:45:47.620837    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:45:47.620837    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:45:47.659796    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:45:47.659796    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:45:47.777284    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:45:47.777284    5332 logs.go:123] Gathering logs for etcd [2684c1bc7d48] ...
	I1217 01:45:47.777284    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 2684c1bc7d48"
	I1217 01:45:47.834369    5332 logs.go:123] Gathering logs for kube-scheduler [ce404ec360fa] ...
	I1217 01:45:47.834369    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 ce404ec360fa"
	I1217 01:45:47.889316    5332 logs.go:123] Gathering logs for kube-controller-manager [078cba2c262b] ...
	I1217 01:45:47.889316    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 078cba2c262b"
	I1217 01:45:47.940277    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:45:47.941273    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:45:47.972834    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:45:47.972895    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 01:45:48.048830    5332 logs.go:123] Gathering logs for kube-apiserver [d872d44b86a8] ...
	I1217 01:45:48.048868    5332 ssh_runner.go:195] Run: /bin/bash -c "docker logs --tail 400 d872d44b86a8"
	I1217 01:45:50.612424    5332 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:45:50.630803    5332 kubeadm.go:602] duration metric: took 4m4.5644936s to restartPrimaryControlPlane
	W1217 01:45:50.630803    5332 out.go:285] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
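
After about 4m04s of those retry cycles, restartPrimaryControlPlane gives up and minikube falls back to wiping cluster state with kubeadm reset --force before attempting a fresh kubeadm init (the "<no value>" above is the raw warning as logged). The overall shape is a deadline-bounded health poll with a reset fallback; a sketch under those assumptions, with placeholder probe/reset logic that is not minikube's API:

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitHealthy polls until the probe succeeds or the deadline passes,
// matching the ~3s cadence visible in the log above.
func waitHealthy(probe func() error, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := probe(); err == nil {
			return nil
		}
		time.Sleep(3 * time.Second)
	}
	return errors.New("control plane did not become healthy")
}

func main() {
	// Demo probe that always fails, so the fallback path is taken.
	err := waitHealthy(func() error { return errors.New("refused") }, 6*time.Second)
	if err != nil {
		fmt.Println("falling back to: kubeadm reset --force, then kubeadm init")
	}
}
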
	I1217 01:45:50.635827    5332 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 01:45:51.286893    5332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:45:51.311537    5332 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:45:51.323717    5332 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:45:51.329908    5332 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:45:51.342218    5332 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:45:51.342218    5332 kubeadm.go:158] found existing configuration files:
	
	I1217 01:45:51.347046    5332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:45:51.363532    5332 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:45:51.368757    5332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:45:51.386283    5332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:45:51.398749    5332 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:45:51.403877    5332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:45:51.424199    5332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:45:51.439667    5332 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:45:51.443536    5332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:45:51.463662    5332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:45:51.478731    5332 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:45:51.483202    5332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
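
The stale-config cleanup above works file by file: grep looks for the expected control-plane endpoint in each kubeconfig, and any file that does not contain it is removed so kubeadm init can rewrite it (exit status 2 here simply means the files do not exist after the reset). A hedged Go equivalent of that loop, with the same paths as the log; run nowhere important, since it deletes files:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or stale endpoint: drop it so kubeadm recreates it.
			_ = os.Remove(f)
			fmt.Println("removed (or absent):", f)
		}
	}
}
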
	I1217 01:45:51.500726    5332 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:45:51.617847    5332 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 01:45:51.707406    5332 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:45:51.812732    5332 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:49:52.505281    5332 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:49:52.505281    5332 kubeadm.go:319] 
	I1217 01:49:52.505281    5332 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:49:52.508266    5332 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:49:52.508266    5332 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:49:52.508266    5332 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:49:52.509270    5332 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 01:49:52.509270    5332 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 01:49:52.509270    5332 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 01:49:52.509270    5332 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 01:49:52.509270    5332 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 01:49:52.509270    5332 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 01:49:52.509270    5332 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 01:49:52.509270    5332 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 01:49:52.510272    5332 kubeadm.go:319] CONFIG_INET: enabled
	I1217 01:49:52.510272    5332 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 01:49:52.510272    5332 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 01:49:52.510272    5332 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 01:49:52.510272    5332 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 01:49:52.510272    5332 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 01:49:52.511276    5332 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 01:49:52.511276    5332 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 01:49:52.511276    5332 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 01:49:52.511276    5332 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 01:49:52.511276    5332 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 01:49:52.511276    5332 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 01:49:52.511276    5332 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 01:49:52.512288    5332 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 01:49:52.512288    5332 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 01:49:52.512288    5332 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 01:49:52.512288    5332 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 01:49:52.512288    5332 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 01:49:52.512288    5332 kubeadm.go:319] OS: Linux
	I1217 01:49:52.512288    5332 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:49:52.513262    5332 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:49:52.513262    5332 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:49:52.513262    5332 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:49:52.513262    5332 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:49:52.513262    5332 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:49:52.513262    5332 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:49:52.513262    5332 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:49:52.513262    5332 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:49:52.514276    5332 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:49:52.514276    5332 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:49:52.514276    5332 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:49:52.514276    5332 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:49:52.517267    5332 out.go:252]   - Generating certificates and keys ...
	I1217 01:49:52.517267    5332 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:49:52.518268    5332 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:49:52.518268    5332 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:49:52.518268    5332 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:49:52.518268    5332 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:49:52.518268    5332 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:49:52.518268    5332 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:49:52.518268    5332 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:49:52.519272    5332 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:49:52.519272    5332 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:49:52.519272    5332 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:49:52.519272    5332 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:49:52.519272    5332 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:49:52.519272    5332 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:49:52.519272    5332 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:49:52.519272    5332 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:49:52.519272    5332 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:49:52.520270    5332 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:49:52.520270    5332 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:49:52.523262    5332 out.go:252]   - Booting up control plane ...
	I1217 01:49:52.523262    5332 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:49:52.523262    5332 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:49:52.523262    5332 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:49:52.523262    5332 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:49:52.523262    5332 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:49:52.524264    5332 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:49:52.524264    5332 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:49:52.524264    5332 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:49:52.524264    5332 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:49:52.524264    5332 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:49:52.524264    5332 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001079838s
	I1217 01:49:52.524264    5332 kubeadm.go:319] 
	I1217 01:49:52.524264    5332 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:49:52.525257    5332 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:49:52.525257    5332 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:49:52.525257    5332 kubeadm.go:319] 
	I1217 01:49:52.525257    5332 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:49:52.525257    5332 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:49:52.525257    5332 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:49:52.525257    5332 kubeadm.go:319] 
	W1217 01:49:52.525257    5332 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001079838s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001079838s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 01:49:52.529259    5332 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 01:49:52.986866    5332 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:49:53.008878    5332 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:49:53.012876    5332 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:49:53.027863    5332 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:49:53.027863    5332 kubeadm.go:158] found existing configuration files:
	
	I1217 01:49:53.031858    5332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:49:53.043861    5332 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:49:53.047868    5332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:49:53.064870    5332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:49:53.077871    5332 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:49:53.081859    5332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:49:53.097870    5332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:49:53.111869    5332 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:49:53.114871    5332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:49:53.134860    5332 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:49:53.663789    5332 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:49:53.668037    5332 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:49:53.685374    5332 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:49:53.834454    5332 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 01:49:53.922114    5332 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:49:54.041582    5332 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:53:54.778577    5332 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 01:53:54.778577    5332 kubeadm.go:319] 
	I1217 01:53:54.778577    5332 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:53:54.781570    5332 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:53:54.781570    5332 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:53:54.782573    5332 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:53:54.782573    5332 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 01:53:54.782573    5332 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 01:53:54.782573    5332 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 01:53:54.782573    5332 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 01:53:54.782573    5332 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 01:53:54.782573    5332 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 01:53:54.782573    5332 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 01:53:54.783582    5332 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 01:53:54.783582    5332 kubeadm.go:319] CONFIG_INET: enabled
	I1217 01:53:54.783582    5332 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 01:53:54.783582    5332 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 01:53:54.783582    5332 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 01:53:54.783582    5332 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 01:53:54.783582    5332 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 01:53:54.783582    5332 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 01:53:54.783582    5332 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 01:53:54.783582    5332 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 01:53:54.784583    5332 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 01:53:54.784583    5332 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 01:53:54.784583    5332 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 01:53:54.784583    5332 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 01:53:54.784583    5332 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 01:53:54.784583    5332 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 01:53:54.784583    5332 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 01:53:54.784583    5332 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 01:53:54.784583    5332 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 01:53:54.784583    5332 kubeadm.go:319] OS: Linux
	I1217 01:53:54.784583    5332 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:53:54.785581    5332 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:53:54.785581    5332 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:53:54.785581    5332 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:53:54.785581    5332 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:53:54.785581    5332 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:53:54.785581    5332 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:53:54.785581    5332 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:53:54.785581    5332 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:53:54.785581    5332 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:53:54.786583    5332 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:53:54.786583    5332 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:53:54.786583    5332 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:53:54.797583    5332 out.go:252]   - Generating certificates and keys ...
	I1217 01:53:54.798583    5332 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:53:54.798583    5332 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:53:54.798583    5332 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 01:53:54.798583    5332 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 01:53:54.798583    5332 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 01:53:54.798583    5332 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 01:53:54.799580    5332 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 01:53:54.799580    5332 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 01:53:54.799580    5332 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 01:53:54.799580    5332 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 01:53:54.799580    5332 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 01:53:54.799580    5332 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:53:54.799580    5332 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:53:54.799580    5332 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:53:54.799580    5332 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:53:54.799580    5332 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:53:54.800583    5332 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:53:54.800583    5332 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:53:54.800583    5332 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:53:54.802584    5332 out.go:252]   - Booting up control plane ...
	I1217 01:53:54.803584    5332 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:53:54.803584    5332 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:53:54.803584    5332 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:53:54.803584    5332 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:53:54.803584    5332 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:53:54.803584    5332 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:53:54.804582    5332 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:53:54.804582    5332 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:53:54.804582    5332 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:53:54.804582    5332 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:53:54.804582    5332 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000349909s
	I1217 01:53:54.804582    5332 kubeadm.go:319] 
	I1217 01:53:54.804582    5332 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:53:54.804582    5332 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:53:54.805584    5332 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:53:54.805584    5332 kubeadm.go:319] 
	I1217 01:53:54.805584    5332 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:53:54.805584    5332 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:53:54.805584    5332 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:53:54.805584    5332 kubeadm.go:319] 
	I1217 01:53:54.805584    5332 kubeadm.go:403] duration metric: took 12m8.7934989s to StartCluster
	I1217 01:53:54.805584    5332 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 01:53:54.809583    5332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 01:53:54.873362    5332 cri.go:89] found id: ""
	I1217 01:53:54.873485    5332 logs.go:282] 0 containers: []
	W1217 01:53:54.873485    5332 logs.go:284] No container was found matching "kube-apiserver"
	I1217 01:53:54.873583    5332 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 01:53:54.879578    5332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 01:53:54.924666    5332 cri.go:89] found id: ""
	I1217 01:53:54.924719    5332 logs.go:282] 0 containers: []
	W1217 01:53:54.924776    5332 logs.go:284] No container was found matching "etcd"
	I1217 01:53:54.924776    5332 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 01:53:54.929751    5332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 01:53:54.976120    5332 cri.go:89] found id: ""
	I1217 01:53:54.976207    5332 logs.go:282] 0 containers: []
	W1217 01:53:54.976207    5332 logs.go:284] No container was found matching "coredns"
	I1217 01:53:54.976207    5332 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 01:53:54.981938    5332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 01:53:55.026991    5332 cri.go:89] found id: ""
	I1217 01:53:55.026991    5332 logs.go:282] 0 containers: []
	W1217 01:53:55.026991    5332 logs.go:284] No container was found matching "kube-scheduler"
	I1217 01:53:55.026991    5332 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 01:53:55.031834    5332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 01:53:55.077407    5332 cri.go:89] found id: ""
	I1217 01:53:55.077407    5332 logs.go:282] 0 containers: []
	W1217 01:53:55.077407    5332 logs.go:284] No container was found matching "kube-proxy"
	I1217 01:53:55.077407    5332 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 01:53:55.081408    5332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 01:53:55.133413    5332 cri.go:89] found id: ""
	I1217 01:53:55.133413    5332 logs.go:282] 0 containers: []
	W1217 01:53:55.133413    5332 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 01:53:55.133413    5332 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 01:53:55.137408    5332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 01:53:55.201411    5332 cri.go:89] found id: ""
	I1217 01:53:55.201411    5332 logs.go:282] 0 containers: []
	W1217 01:53:55.201411    5332 logs.go:284] No container was found matching "kindnet"
	I1217 01:53:55.201411    5332 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I1217 01:53:55.205423    5332 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I1217 01:53:55.254409    5332 cri.go:89] found id: ""
	I1217 01:53:55.254409    5332 logs.go:282] 0 containers: []
	W1217 01:53:55.254409    5332 logs.go:284] No container was found matching "storage-provisioner"
	I1217 01:53:55.254409    5332 logs.go:123] Gathering logs for kubelet ...
	I1217 01:53:55.254409    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 01:53:55.321405    5332 logs.go:123] Gathering logs for dmesg ...
	I1217 01:53:55.321405    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 01:53:55.793533    5332 logs.go:123] Gathering logs for describe nodes ...
	I1217 01:53:55.793533    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 01:53:55.895951    5332 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 01:53:55.895951    5332 logs.go:123] Gathering logs for Docker ...
	I1217 01:53:55.895951    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 01:53:55.927944    5332 logs.go:123] Gathering logs for container status ...
	I1217 01:53:55.927944    5332 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 01:53:56.010143    5332 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000349909s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 01:53:56.010143    5332 out.go:285] * 
	W1217 01:53:56.010143    5332 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000349909s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:53:56.010143    5332 out.go:285] * 
	W1217 01:53:56.012664    5332 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 01:53:56.019407    5332 out.go:203] 
	W1217 01:53:56.022627    5332 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000349909s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 01:53:56.022861    5332 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 01:53:56.022861    5332 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 01:53:56.025483    5332 out.go:203] 

** /stderr **
version_upgrade_test.go:245: failed to upgrade with newest k8s version. args: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-228200 --memory=3072 --kubernetes-version=v1.35.0-beta.0 --alsologtostderr -v=1 --driver=docker : exit status 109
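The two Suggestion lines above come from minikube itself and point at the kubelet cgroup driver. A minimal follow-up sketch along those lines (profile name, binary path, and flags are taken from this run; the retry was not executed by this job):

    # inspect why the kubelet never became healthy (command suggested in the log above)
    out/minikube-windows-amd64.exe ssh -p kubernetes-upgrade-228200 -- sudo journalctl -xeu kubelet --no-pager
    # retry the upgrade start with the suggested cgroup driver override
    out/minikube-windows-amd64.exe start -p kubernetes-upgrade-228200 --kubernetes-version=v1.35.0-beta.0 --driver=docker --extra-config=kubelet.cgroup-driver=systemd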
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-228200 version --output=json
version_upgrade_test.go:248: (dbg) Non-zero exit: kubectl --context kubernetes-upgrade-228200 version --output=json: exit status 1 (10.2228223s)

-- stdout --
	{
	  "clientVersion": {
	    "major": "1",
	    "minor": "34",
	    "gitVersion": "v1.34.3",
	    "gitCommit": "df11db1c0f08fab3c0baee1e5ce6efbf816af7f1",
	    "gitTreeState": "clean",
	    "buildDate": "2025-12-09T15:06:39Z",
	    "goVersion": "go1.24.11",
	    "compiler": "gc",
	    "platform": "windows/amd64"
	  },
	  "kustomizeVersion": "v5.7.1"
	}

-- /stdout --
** stderr ** 
	Unable to connect to the server: EOF

** /stderr **
version_upgrade_test.go:250: error running kubectl: exit status 1
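kubectl printed its client block but the server call died with EOF, so the failure is apiserver reachability rather than kubectl itself. A generic connectivity check, not something the harness runs:

    # a healthy control plane answers "ok" on /readyz
    kubectl --context kubernetes-upgrade-228200 get --raw /readyz
    # confirm which server URL the context resolves to
    kubectl config view --minify --context kubernetes-upgrade-228200 -o jsonpath='{.clusters[0].cluster.server}'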
panic.go:615: *** TestKubernetesUpgrade FAILED at 2025-12-17 01:54:09.3614789 +0000 UTC m=+6544.383385601
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestKubernetesUpgrade]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestKubernetesUpgrade]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect kubernetes-upgrade-228200
helpers_test.go:244: (dbg) docker inspect kubernetes-upgrade-228200:

-- stdout --
	[
	    {
	        "Id": "781290120a285c969d36f5c61de0db52917f9a7e934c8ed77604b91f101b7417",
	        "Created": "2025-12-17T01:40:24.19832024Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 272939,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:41:19.123096913Z",
	            "FinishedAt": "2025-12-17T01:41:16.589129159Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/781290120a285c969d36f5c61de0db52917f9a7e934c8ed77604b91f101b7417/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/781290120a285c969d36f5c61de0db52917f9a7e934c8ed77604b91f101b7417/hostname",
	        "HostsPath": "/var/lib/docker/containers/781290120a285c969d36f5c61de0db52917f9a7e934c8ed77604b91f101b7417/hosts",
	        "LogPath": "/var/lib/docker/containers/781290120a285c969d36f5c61de0db52917f9a7e934c8ed77604b91f101b7417/781290120a285c969d36f5c61de0db52917f9a7e934c8ed77604b91f101b7417-json.log",
	        "Name": "/kubernetes-upgrade-228200",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "kubernetes-upgrade-228200:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "kubernetes-upgrade-228200",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/2b6dea5e9a145e06b56c81c4edcb2694d1aec681d9012ad983e6381e3e35bdf7-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/2b6dea5e9a145e06b56c81c4edcb2694d1aec681d9012ad983e6381e3e35bdf7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/2b6dea5e9a145e06b56c81c4edcb2694d1aec681d9012ad983e6381e3e35bdf7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/2b6dea5e9a145e06b56c81c4edcb2694d1aec681d9012ad983e6381e3e35bdf7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "kubernetes-upgrade-228200",
	                "Source": "/var/lib/docker/volumes/kubernetes-upgrade-228200/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "kubernetes-upgrade-228200",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "kubernetes-upgrade-228200",
	                "name.minikube.sigs.k8s.io": "kubernetes-upgrade-228200",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "3e87597cc2d545f8334ae9bea825334ae3ef73ca916bfa4033806c7666fbe11a",
	            "SandboxKey": "/var/run/docker/netns/3e87597cc2d5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60993"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60994"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60995"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60996"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "60997"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "kubernetes-upgrade-228200": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "afebce0091acb6fadc4e851c7b5831d3e243cd2abbc9f4b4c6c8cd974b5b87d1",
	                    "EndpointID": "ddf46974c8467ad2488032612bdc4e4a98490d043e8f27d85d8f92be60b18912",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "kubernetes-upgrade-228200",
	                        "781290120a28"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
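Individual fields can be pulled out of an inspect dump like the one above with a Go template instead of scanning the JSON; the first template below is the same one this log later shows the harness using for port lookup:

    # host port mapped to the apiserver port (60997 in the dump above)
    docker container inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' kubernetes-upgrade-228200
    # container IP on the profile network (192.168.76.2 in the dump above)
    docker container inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' kubernetes-upgrade-228200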
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-228200 -n kubernetes-upgrade-228200
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-228200 -n kubernetes-upgrade-228200: exit status 2 (628.1958ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-228200 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p kubernetes-upgrade-228200 logs -n 25: (1.2094436s)
helpers_test.go:261: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                 ARGS                                                                 │      PROFILE      │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-891300 sudo systemctl cat docker --no-pager                                                                                │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ ssh     │ -p bridge-891300 sudo cat /etc/docker/daemon.json                                                                                    │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ ssh     │ -p bridge-891300 sudo docker system info                                                                                             │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ ssh     │ -p bridge-891300 sudo systemctl status cri-docker --all --full --no-pager                                                            │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ ssh     │ -p bridge-891300 sudo systemctl cat cri-docker --no-pager                                                                            │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ ssh     │ -p bridge-891300 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                       │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ ssh     │ -p bridge-891300 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                 │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ ssh     │ -p bridge-891300 sudo cri-dockerd --version                                                                                          │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ ssh     │ -p bridge-891300 sudo systemctl status containerd --all --full --no-pager                                                            │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ ssh     │ -p kubenet-891300 pgrep -a kubelet                                                                                                   │ kubenet-891300    │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ ssh     │ -p bridge-891300 sudo systemctl cat containerd --no-pager                                                                            │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ ssh     │ -p bridge-891300 sudo cat /lib/systemd/system/containerd.service                                                                     │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ ssh     │ -p bridge-891300 sudo cat /etc/containerd/config.toml                                                                                │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ ssh     │ -p bridge-891300 sudo containerd config dump                                                                                         │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ ssh     │ -p bridge-891300 sudo systemctl status crio --all --full --no-pager                                                                  │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │                     │
	│ ssh     │ -p bridge-891300 sudo systemctl cat crio --no-pager                                                                                  │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ ssh     │ -p bridge-891300 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                        │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ ssh     │ -p bridge-891300 sudo crio config                                                                                                    │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ delete  │ -p bridge-891300                                                                                                                     │ bridge-891300     │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │ 17 Dec 25 01:53 UTC │
	│ start   │ -p no-preload-184000 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0 │ no-preload-184000 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:53 UTC │                     │
	│ ssh     │ -p kubenet-891300 sudo cat /etc/nsswitch.conf                                                                                        │ kubenet-891300    │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:54 UTC │ 17 Dec 25 01:54 UTC │
	│ ssh     │ -p kubenet-891300 sudo cat /etc/hosts                                                                                                │ kubenet-891300    │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:54 UTC │ 17 Dec 25 01:54 UTC │
	│ ssh     │ -p kubenet-891300 sudo cat /etc/resolv.conf                                                                                          │ kubenet-891300    │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:54 UTC │ 17 Dec 25 01:54 UTC │
	│ ssh     │ -p kubenet-891300 sudo crictl pods                                                                                                   │ kubenet-891300    │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:54 UTC │ 17 Dec 25 01:54 UTC │
	│ ssh     │ -p kubenet-891300 sudo crictl ps --all                                                                                               │ kubenet-891300    │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:54 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:53:56
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:53:56.107435    7596 out.go:360] Setting OutFile to fd 1164 ...
	I1217 01:53:56.151438    7596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:53:56.151438    7596 out.go:374] Setting ErrFile to fd 1324...
	I1217 01:53:56.151438    7596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:53:56.166456    7596 out.go:368] Setting JSON to false
	I1217 01:53:56.169445    7596 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8024,"bootTime":1765928411,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 01:53:56.169445    7596 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 01:53:56.182449    7596 out.go:179] * [no-preload-184000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 01:53:56.193446    7596 notify.go:221] Checking for updates...
	I1217 01:53:56.196476    7596 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 01:53:56.200440    7596 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:53:56.202445    7596 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 01:53:56.205440    7596 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:53:56.207442    7596 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:53:53.041619    4316 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:53:53.084763    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:53:53.109051    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:53:53.127054    4316 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:53:53.152045    4316 ssh_runner.go:195] Run: which cri-dockerd
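	The /etc/crictl.yaml written two steps up is what lets crictl reach cri-dockerd. A quick manual check against the same endpoint (generic crictl usage, not part of the harness):

    # crictl reads the endpoint from /etc/crictl.yaml ...
    sudo crictl info
    # ... or it can be given explicitly
    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock ps -a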
	I1217 01:53:53.163044    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 01:53:53.176057    4316 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I1217 01:53:53.202754    4316 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 01:53:53.374921    4316 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 01:53:53.530138    4316 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 01:53:53.530665    4316 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
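	The 130-byte daemon.json pushed here selects the "cgroupfs" driver, but its contents are not echoed in the log. A plausible minimal payload, assuming minikube's usual exec-opts mechanism (the real file may carry more keys):

    # hypothetical daemon.json matching the "cgroupfs" choice logged above
    printf '%s\n' '{"exec-opts": ["native.cgroupdriver=cgroupfs"]}' | sudo tee /etc/docker/daemon.json
    # the harness follows with the same reload/restart sequence shown below
    sudo systemctl daemon-reload && sudo systemctl restart docker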
	I1217 01:53:53.561635    4316 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 01:53:53.582853    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:53:53.773920    4316 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 01:53:54.742820    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:53:54.767572    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 01:53:54.791583    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:53:54.815569    4316 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 01:53:54.967450    4316 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 01:53:55.112403    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:53:55.261423    4316 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 01:53:55.288405    4316 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 01:53:55.780000    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:53:55.937947    4316 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 01:53:56.063453    4316 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:53:56.082442    4316 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 01:53:56.085444    4316 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 01:53:56.092437    4316 start.go:564] Will wait 60s for crictl version
	I1217 01:53:56.096448    4316 ssh_runner.go:195] Run: which crictl
	I1217 01:53:56.107435    4316 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:53:56.151438    4316 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 01:53:56.155442    4316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:53:56.202445    4316 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:53:56.211438    7596 config.go:182] Loaded profile config "kubenet-891300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:53:56.211438    7596 config.go:182] Loaded profile config "kubernetes-upgrade-228200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 01:53:56.211438    7596 config.go:182] Loaded profile config "old-k8s-version-044000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1217 01:53:56.211438    7596 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:53:56.333443    7596 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 01:53:56.337450    7596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:53:56.624814    7596 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:53:56.601685599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:53:56.628822    7596 out.go:179] * Using the docker driver based on user configuration
	I1217 01:53:56.630821    7596 start.go:309] selected driver: docker
	I1217 01:53:56.630821    7596 start.go:927] validating driver "docker" against <nil>
	I1217 01:53:56.630821    7596 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:53:56.697176    7596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:53:56.962177    7596 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:53:56.940921788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:53:56.963186    7596 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 01:53:56.964178    7596 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:53:56.967182    7596 out.go:179] * Using Docker Desktop driver with root privileges
	I1217 01:53:56.971175    7596 cni.go:84] Creating CNI manager for ""
	I1217 01:53:56.971175    7596 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 01:53:56.971175    7596 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
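	Auto-detection picked the bridge CNI for the docker driver plus docker runtime pairing. The choice can also be pinned at start time (generic minikube flag; this run relied on the default):

    # make the CNI selection explicit instead of relying on auto-detection
    out/minikube-windows-amd64.exe start -p no-preload-184000 --driver=docker --cni=bridge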
	I1217 01:53:56.971175    7596 start.go:353] cluster config:
	{Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:53:56.975173    7596 out.go:179] * Starting "no-preload-184000" primary control-plane node in "no-preload-184000" cluster
	I1217 01:53:56.977175    7596 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 01:53:56.979182    7596 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:53:56.982182    7596 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:53:56.982182    7596 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:53:56.982182    7596 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\config.json ...
	I1217 01:53:56.982182    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1217 01:53:56.982182    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1217 01:53:56.982182    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1217 01:53:56.982182    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1217 01:53:56.983178    7596 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\config.json: {Name:mk142cf71314bd75adaac8add25d15852fa59f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:53:56.983178    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1217 01:53:56.983178    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1217 01:53:56.983178    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1217 01:53:56.983178    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1217 01:53:57.314244    7596 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:53:57.314244    7596 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:53:57.314244    7596 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:53:57.314244    7596 start.go:360] acquireMachinesLock for no-preload-184000: {Name:mk58fd592c3ebf84a2801325b861ffe90e12015f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:53:57.314244    7596 start.go:364] duration metric: took 0s to acquireMachinesLock for "no-preload-184000"
	I1217 01:53:57.314244    7596 start.go:93] Provisioning new machine with config: &{Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 01:53:57.314244    7596 start.go:125] createHost starting for "" (driver="docker")
	I1217 01:53:56.249448    4316 out.go:252] * Preparing Kubernetes v1.28.0 on Docker 29.1.3 ...
	I1217 01:53:56.253447    4316 cli_runner.go:164] Run: docker exec -t old-k8s-version-044000 dig +short host.docker.internal
	I1217 01:53:56.390452    4316 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 01:53:56.397455    4316 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 01:53:56.422822    4316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:53:56.454828    4316 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" old-k8s-version-044000
	I1217 01:53:56.520813    4316 kubeadm.go:884] updating cluster {Name:old-k8s-version-044000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-044000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:53:56.520813    4316 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1217 01:53:56.525814    4316 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 01:53:56.563831    4316 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 01:53:56.563831    4316 docker.go:621] Images already preloaded, skipping extraction
	I1217 01:53:56.569820    4316 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 01:53:56.612822    4316 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.28.0
	registry.k8s.io/kube-scheduler:v1.28.0
	registry.k8s.io/kube-controller-manager:v1.28.0
	registry.k8s.io/kube-proxy:v1.28.0
	registry.k8s.io/etcd:3.5.9-0
	registry.k8s.io/coredns/coredns:v1.10.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 01:53:56.612822    4316 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:53:56.612822    4316 kubeadm.go:935] updating node { 192.168.85.2 8443 v1.28.0 docker true true} ...
	I1217 01:53:56.612822    4316 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=old-k8s-version-044000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.85.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-044000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
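	The [Unit]/[Service] fragment above lands as a systemd drop-in for the kubelet. What systemd actually merged can be confirmed with the same command family the Audit table at the top of this log uses for docker and cri-docker:

    # show kubelet.service plus all drop-ins (10-kubeadm.conf is copied in a few lines below)
    out/minikube-windows-amd64.exe ssh -p old-k8s-version-044000 -- sudo systemctl cat kubelet --no-pager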
	I1217 01:53:56.617813    4316 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 01:53:56.726172    4316 cni.go:84] Creating CNI manager for ""
	I1217 01:53:56.726172    4316 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 01:53:56.726172    4316 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:53:56.726172    4316 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.85.2 APIServerPort:8443 KubernetesVersion:v1.28.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-044000 NodeName:old-k8s-version-044000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.85.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.85.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:53:56.726172    4316 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.85.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "old-k8s-version-044000"
	  kubeletExtraArgs:
	    node-ip: 192.168.85.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.85.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:53:56.730184    4316 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.0
	I1217 01:53:56.768184    4316 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:53:56.776181    4316 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:53:56.807176    4316 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (321 bytes)
	I1217 01:53:56.832185    4316 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1217 01:53:56.854179    4316 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2164 bytes)
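	The rendered kubeadm config is staged as kubeadm.yaml.new before kubeadm consumes it. With the pinned v1.28.0 binaries found above it could also be sanity-checked offline; a sketch, assuming the `kubeadm config validate` subcommand shipped with that version (the harness does not run this step):

    # validate the staged config with the same kubeadm binary version
    sudo /var/lib/minikube/binaries/v1.28.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new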
	I1217 01:53:56.882186    4316 ssh_runner.go:195] Run: grep 192.168.85.2	control-plane.minikube.internal$ /etc/hosts
	I1217 01:53:56.890175    4316 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.85.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:53:56.912178    4316 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:53:57.312246    4316 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:53:57.350242    4316 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000 for IP: 192.168.85.2
	I1217 01:53:57.351261    4316 certs.go:195] generating shared ca certs ...
	I1217 01:53:57.351261    4316 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:53:57.352258    4316 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 01:53:57.352258    4316 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 01:53:57.353232    4316 certs.go:257] generating profile certs ...
	I1217 01:53:57.354243    4316 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\client.key
	I1217 01:53:57.354243    4316 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\client.crt with IP's: []
	I1217 01:53:57.469349    4316 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\client.crt ...
	I1217 01:53:57.469349    4316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\client.crt: {Name:mk2ad90f12bff0cf11c3674ef281380dfd12f10f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:53:57.471621    4316 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\client.key ...
	I1217 01:53:57.471621    4316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\client.key: {Name:mk6e3e5cd61308910568dfdbfb3757f3aeff35df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:53:57.473510    4316 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\apiserver.key.c34dc226
	I1217 01:53:57.473510    4316 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\apiserver.crt.c34dc226 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.85.2]
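	The apiserver certificate must carry every address a client might dial - the service VIP 10.96.0.1, loopback, and the node IP 192.168.85.2 - as IP SANs, or TLS verification fails. A minimal sketch of issuing such a CA-signed serving certificate with Go's standard crypto/x509; the helper name signServingCert is illustrative, not minikube's crypto.go, and the 26280h lifetime mirrors the CertExpiration value logged further down:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	// signServingCert issues a serving cert for the given IP SANs, signed by caCert/caKey.
	func signServingCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{CommonName: "minikube"},
			IPAddresses:  ips, // e.g. 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.85.2
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration:26280h0m0s
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}

	func main() {
		// Hypothetical driver: self-sign a throwaway CA, then issue the serving cert.
		caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(24 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		caCert, _ := x509.ParseCertificate(caDER)
		ips := []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"), net.ParseIP("192.168.85.2")}
		der, _, err := signServingCert(caCert, caKey, ips)
		if err != nil {
			panic(err)
		}
		fmt.Println("issued cert,", len(der), "DER bytes")
	}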
	I1217 01:53:57.318247    7596 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 01:53:57.318247    7596 start.go:159] libmachine.API.Create for "no-preload-184000" (driver="docker")
	I1217 01:53:57.318247    7596 client.go:173] LocalClient.Create starting
	I1217 01:53:57.319247    7596 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1217 01:53:57.319247    7596 main.go:143] libmachine: Decoding PEM data...
	I1217 01:53:57.319247    7596 main.go:143] libmachine: Parsing certificate...
	I1217 01:53:57.319247    7596 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1217 01:53:57.319247    7596 main.go:143] libmachine: Decoding PEM data...
	I1217 01:53:57.320245    7596 main.go:143] libmachine: Parsing certificate...
	I1217 01:53:57.327242    7596 cli_runner.go:164] Run: docker network inspect no-preload-184000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 01:53:57.409235    7596 cli_runner.go:211] docker network inspect no-preload-184000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 01:53:57.418254    7596 network_create.go:284] running [docker network inspect no-preload-184000] to gather additional debugging logs...
	I1217 01:53:57.418254    7596 cli_runner.go:164] Run: docker network inspect no-preload-184000
	W1217 01:53:57.537210    7596 cli_runner.go:211] docker network inspect no-preload-184000 returned with exit code 1
	I1217 01:53:57.537210    7596 network_create.go:287] error running [docker network inspect no-preload-184000]: docker network inspect no-preload-184000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-184000 not found
	I1217 01:53:57.537210    7596 network_create.go:289] output of [docker network inspect no-preload-184000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-184000 not found
	
	** /stderr **
	I1217 01:53:57.542195    7596 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:53:58.593244    7596 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0510331s)
	I1217 01:53:58.634367    7596 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:53:58.700722    7596 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:53:59.010012    7596 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:53:59.041404    7596 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:53:59.072671    7596 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:53:59.095233    7596 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0019a7350}
	I1217 01:53:59.095328    7596 network_create.go:124] attempt to create docker network no-preload-184000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1217 01:53:59.100874    7596 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-184000 no-preload-184000
	I1217 01:53:59.353785    7596 network_create.go:108] docker network no-preload-184000 192.168.94.0/24 created
	I1217 01:53:59.353785    7596 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-184000" container
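	The subnet scan above starts at 192.168.49.0/24 and advances the third octet by 9 (49, 58, 67, 76, 85, 94, ...), taking the first candidate no existing docker network has claimed. A minimal sketch of that selection loop, with the reserved set standing in for the docker network inspection minikube actually performs:

	package main

	import "fmt"

	// firstFreeSubnet returns the first 192.168.x.0/24 candidate not in use.
	// Candidates step the third octet by 9, matching the log above.
	func firstFreeSubnet(reserved map[string]bool) (string, bool) {
		for octet := 49; octet <= 255; octet += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", octet)
			if !reserved[cidr] {
				return cidr, true
			}
		}
		return "", false
	}

	func main() {
		// Reserved set as reported by the "skipping subnet ... that is reserved" lines.
		reserved := map[string]bool{
			"192.168.49.0/24": true, "192.168.58.0/24": true,
			"192.168.67.0/24": true, "192.168.76.0/24": true,
			"192.168.85.0/24": true,
		}
		if cidr, ok := firstFreeSubnet(reserved); ok {
			fmt.Println("using free private subnet", cidr) // prints 192.168.94.0/24
		}
	}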
	I1217 01:53:59.363785    7596 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 01:53:59.458651    7596 cli_runner.go:164] Run: docker volume create no-preload-184000 --label name.minikube.sigs.k8s.io=no-preload-184000 --label created_by.minikube.sigs.k8s.io=true
	I1217 01:53:59.550942    7596 oci.go:103] Successfully created a docker volume no-preload-184000
	I1217 01:53:59.557019    7596 cli_runner.go:164] Run: docker run --rm --name no-preload-184000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-184000 --entrypoint /usr/bin/test -v no-preload-184000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 01:54:00.152301    7596 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:54:00.152892    7596 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:54:00.155046    7596 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:54:00.155046    7596 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:54:00.165045    7596 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:54:00.166020    7596 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:54:00.186017    7596 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:54:00.187021    7596 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:54:00.188027    7596 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:54:00.188027    7596 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1217 01:54:00.188027    7596 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.2057987s
	I1217 01:54:00.188027    7596 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1217 01:54:00.198039    7596 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:54:00.202026    7596 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:54:00.203026    7596 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1217 01:54:00.203026    7596 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.2207971s
	I1217 01:54:00.203026    7596 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1217 01:54:00.242023    7596 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:54:00.242023    7596 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1217 01:54:00.243028    7596 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.2598025s
	I1217 01:54:00.243028    7596 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1217 01:54:00.245024    7596 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:54:00.245024    7596 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:54:00.256030    7596 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:54:00.259026    7596 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:54:00.259026    7596 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	W1217 01:54:00.263046    7596 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1217 01:54:00.268032    7596 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	W1217 01:54:00.315037    7596 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1217 01:54:00.374022    7596 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1217 01:54:00.442024    7596 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1217 01:54:00.510025    7596 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1217 01:54:00.784655    7596 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1217 01:54:00.786656    7596 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1217 01:54:00.799658    7596 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1217 01:54:00.838120    7596 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1217 01:54:00.845104    7596 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1217 01:53:58.621613    4316 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\apiserver.crt.c34dc226 ...
	I1217 01:53:58.621613    4316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\apiserver.crt.c34dc226: {Name:mkb718e26fff47721f4fab6ffcd8744c7ea3f59b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:53:58.633366    4316 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\apiserver.key.c34dc226 ...
	I1217 01:53:58.633366    4316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\apiserver.key.c34dc226: {Name:mk4b96d2ead8fe575b406f56939b0b8a325eab81 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:53:58.641968    4316 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\apiserver.crt.c34dc226 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\apiserver.crt
	I1217 01:53:58.656968    4316 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\apiserver.key.c34dc226 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\apiserver.key
	I1217 01:53:58.660967    4316 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\proxy-client.key
	I1217 01:53:58.660967    4316 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\proxy-client.crt with IP's: []
	I1217 01:53:58.717374    4316 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\proxy-client.crt ...
	I1217 01:53:58.717374    4316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\proxy-client.crt: {Name:mkef27a3dd75f709dcd3e39dd6e14455a21833b5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:53:58.740010    4316 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\proxy-client.key ...
	I1217 01:53:58.740010    4316 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\proxy-client.key: {Name:mke6560a8c9b7d202274700d50210784a9a867c8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:53:58.783772    4316 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 01:53:58.783772    4316 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 01:53:58.783772    4316 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 01:53:58.784784    4316 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 01:53:58.784784    4316 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 01:53:58.784784    4316 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 01:53:58.784784    4316 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 01:53:58.786786    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:53:58.870037    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:53:58.960687    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:53:59.022619    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:53:59.070680    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I1217 01:53:59.118925    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:53:59.171916    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:53:59.217014    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\old-k8s-version-044000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:53:59.271247    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 01:53:59.321779    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 01:53:59.365786    4316 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:53:59.433070    4316 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:53:59.475693    4316 ssh_runner.go:195] Run: openssl version
	I1217 01:53:59.503661    4316 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 01:53:59.536202    4316 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 01:53:59.572189    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 01:53:59.581199    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 01:53:59.587207    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 01:53:59.667892    4316 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:53:59.894244    4316 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
	I1217 01:53:59.918613    4316 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:53:59.946112    4316 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:53:59.970955    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:53:59.981732    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:53:59.988979    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:54:00.069575    4316 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:54:00.090577    4316 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 01:54:00.115402    4316 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 01:54:00.143052    4316 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 01:54:00.169026    4316 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 01:54:00.179052    4316 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 01:54:00.186017    4316 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 01:54:00.252021    4316 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:54:00.271025    4316 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
	I1217 01:54:00.288023    4316 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:54:00.295033    4316 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:54:00.296019    4316 kubeadm.go:401] StartCluster: {Name:old-k8s-version-044000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:old-k8s-version-044000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:54:00.299019    4316 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 01:54:00.342017    4316 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:54:00.362027    4316 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:54:00.378024    4316 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:54:00.384025    4316 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:54:00.399025    4316 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:54:00.399025    4316 kubeadm.go:158] found existing configuration files:
	
	I1217 01:54:00.403031    4316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:54:00.427039    4316 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:54:00.433034    4316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:54:00.454037    4316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:54:00.469032    4316 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:54:00.476068    4316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:54:00.502033    4316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:54:00.519031    4316 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:54:00.525032    4316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:54:00.548027    4316 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:54:00.565027    4316 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:54:00.571026    4316 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:54:00.593030    4316 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:54:00.794648    4316 kubeadm.go:319] 	[WARNING Swap]: swap is enabled; production deployments should disable swap unless testing the NodeSwap feature gate of the kubelet
	I1217 01:54:00.930137    4316 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
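	Bootstrap itself is one long-running command: kubeadm init driven over SSH, with the preflight checks that cannot hold inside a kic container explicitly suppressed. A minimal local sketch of the same invocation using os/exec (flag list abbreviated; ssh_runner's output streaming, retries, and timeouts are omitted):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Preflight checks that cannot pass inside a kic container are skipped.
		ignore := "DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem,SystemVerification"
		script := `env PATH="/var/lib/minikube/binaries/v1.28.0:$PATH" ` +
			`kubeadm init --config /var/tmp/minikube/kubeadm.yaml ` +
			`--ignore-preflight-errors=` + ignore
		cmd := exec.Command("sudo", "/bin/bash", "-c", script)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			// kubeadm's own [WARNING ...] lines still surface on stderr.
			os.Exit(1)
		}
	}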
	I1217 01:54:01.223985    7596 cli_runner.go:217] Completed: docker run --rm --name no-preload-184000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-184000 --entrypoint /usr/bin/test -v no-preload-184000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.6668975s)
	I1217 01:54:01.223985    7596 oci.go:107] Successfully prepared a docker volume no-preload-184000
	I1217 01:54:01.223985    7596 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:54:01.226983    7596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:54:01.478990    7596 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:54:01.455886868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:54:01.483991    7596 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 01:54:01.658880    7596 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1217 01:54:01.658880    7596 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 4.6756332s
	I1217 01:54:01.658880    7596 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1217 01:54:01.755214    7596 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-184000 --name no-preload-184000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-184000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-184000 --network no-preload-184000 --ip 192.168.94.2 --volume no-preload-184000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 01:54:02.468284    7596 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Running}}
	I1217 01:54:02.535939    7596 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 01:54:02.604940    7596 cli_runner.go:164] Run: docker exec no-preload-184000 stat /var/lib/dpkg/alternatives/iptables
	I1217 01:54:02.745944    7596 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1217 01:54:02.745944    7596 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 5.763678s
	I1217 01:54:02.745944    7596 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1217 01:54:02.747953    7596 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1217 01:54:02.747953    7596 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 5.7646901s
	I1217 01:54:02.747953    7596 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1217 01:54:02.750945    7596 oci.go:144] the created container "no-preload-184000" has a running status.
	I1217 01:54:02.750945    7596 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa...
	I1217 01:54:02.798670    7596 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 01:54:02.865326    7596 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1217 01:54:02.865326    7596 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 5.8820616s
	I1217 01:54:02.865326    7596 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1217 01:54:02.888255    7596 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 01:54:02.923275    7596 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1217 01:54:02.923275    7596 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 5.9400098s
	I1217 01:54:02.923275    7596 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1217 01:54:02.923275    7596 cache.go:87] Successfully saved all images to host disk.
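	Each cache entry above follows the same pattern: take a per-image lock, return immediately if the tar file is already on disk, otherwise pull the image and save it. A minimal sketch of that check-then-save flow; ensureCached and the save callback are hypothetical stand-ins for cache.go's internals:

	package main

	import (
		"errors"
		"fmt"
		"io/fs"
		"os"
		"sync"
	)

	var cacheLocks sync.Map // one mutex per cache path, like the mk... locks above

	// ensureCached saves image to path unless a previous run already did.
	func ensureCached(image, path string, save func(string, string) error) error {
		mu, _ := cacheLocks.LoadOrStore(path, &sync.Mutex{})
		mu.(*sync.Mutex).Lock()
		defer mu.(*sync.Mutex).Unlock()

		if _, err := os.Stat(path); err == nil {
			fmt.Printf("cache image %q -> %q exists\n", image, path)
			return nil
		} else if !errors.Is(err, fs.ErrNotExist) {
			return err
		}
		return save(image, path) // pull the image and write the tar file
	}

	func main() {
		save := func(image, path string) error {
			fmt.Printf("save to tar file %s -> %s\n", image, path)
			return os.WriteFile(path, []byte{}, 0o644)
		}
		_ = ensureCached("registry.k8s.io/pause:3.10.1", "pause_3.10.1.tar", save)
	}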
	I1217 01:54:02.951256    7596 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 01:54:02.951256    7596 kic_runner.go:114] Args: [docker exec --privileged no-preload-184000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 01:54:03.086147    7596 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa...
	I1217 01:54:05.281518    7596 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 01:54:05.330994    7596 machine.go:94] provisionDockerMachine start ...
	I1217 01:54:05.334951    7596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 01:54:05.395073    7596 main.go:143] libmachine: Using SSH client type: native
	I1217 01:54:05.408226    7596 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 62904 <nil> <nil>}
	I1217 01:54:05.408226    7596 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:54:05.599714    7596 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-184000
	
	I1217 01:54:05.599714    7596 ubuntu.go:182] provisioning hostname "no-preload-184000"
	I1217 01:54:05.603713    7596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 01:54:05.655709    7596 main.go:143] libmachine: Using SSH client type: native
	I1217 01:54:05.655709    7596 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 62904 <nil> <nil>}
	I1217 01:54:05.655709    7596 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-184000 && echo "no-preload-184000" | sudo tee /etc/hostname
	I1217 01:54:05.836444    7596 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-184000
	
	I1217 01:54:05.839784    7596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 01:54:05.895965    7596 main.go:143] libmachine: Using SSH client type: native
	I1217 01:54:05.897008    7596 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 62904 <nil> <nil>}
	I1217 01:54:05.897008    7596 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-184000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-184000/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-184000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:54:06.072684    7596 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:54:06.072684    7596 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 01:54:06.072684    7596 ubuntu.go:190] setting up certificates
	I1217 01:54:06.072684    7596 provision.go:84] configureAuth start
	I1217 01:54:06.077293    7596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-184000
	I1217 01:54:06.133029    7596 provision.go:143] copyHostCerts
	I1217 01:54:06.133029    7596 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 01:54:06.133029    7596 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 01:54:06.133708    7596 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 01:54:06.134317    7596 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 01:54:06.134317    7596 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 01:54:06.134317    7596 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 01:54:06.135112    7596 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 01:54:06.135112    7596 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 01:54:06.135784    7596 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 01:54:06.136394    7596 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-184000 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-184000]
	
	
	==> Docker <==
	Dec 17 01:41:36 kubernetes-upgrade-228200 systemd[1]: Starting docker.service - Docker Application Container Engine...
	Dec 17 01:41:36 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:36.624110994Z" level=info msg="Starting up"
	Dec 17 01:41:36 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:36.648859955Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
	Dec 17 01:41:36 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:36.649020169Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/etc/cdi
	Dec 17 01:41:36 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:36.649034370Z" level=info msg="CDI directory does not exist, skipping: failed to monitor for changes: no such file or directory" dir=/var/run/cdi
	Dec 17 01:41:36 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:36.665137476Z" level=info msg="Creating a containerd client" address=/run/containerd/containerd.sock timeout=1m0s
	Dec 17 01:41:36 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:36.824810019Z" level=info msg="Loading containers: start."
	Dec 17 01:41:36 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:36.826850697Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Dec 17 01:41:43 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:43.036036473Z" level=info msg="Restoring containers: start."
	Dec 17 01:41:43 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:43.174491639Z" level=info msg="Deleting nftables IPv4 rules" error="exit status 1"
	Dec 17 01:41:43 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:43.234484053Z" level=info msg="Deleting nftables IPv6 rules" error="exit status 1"
	Dec 17 01:41:43 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:43.693356105Z" level=info msg="Loading containers: done."
	Dec 17 01:41:43 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:43.725576759Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 17 01:41:43 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:43.725656566Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 17 01:41:43 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:43.725666267Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 17 01:41:43 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:43.725671968Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 01:41:43 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:43.725679068Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 01:41:43 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:43.725703870Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 01:41:43 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:43.725791878Z" level=info msg="Initializing buildkit"
	Dec 17 01:41:43 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:43.854336466Z" level=info msg="Completed buildkit initialization"
	Dec 17 01:41:43 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:43.859935162Z" level=info msg="Daemon has completed initialization"
	Dec 17 01:41:43 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:43.860165082Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 01:41:43 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:43.860220987Z" level=info msg="API listen on [::]:2376"
	Dec 17 01:41:43 kubernetes-upgrade-228200 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 01:41:43 kubernetes-upgrade-228200 dockerd[1440]: time="2025-12-17T01:41:43.860223688Z" level=info msg="API listen on /var/run/docker.sock"
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
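	A connection refusal on localhost:8443 means nothing is listening where the apiserver should be, which matches the kubelet crash loop shown below. A minimal diagnostic sketch that separates "nothing listening" from "listening but unhealthy" by probing the apiserver's /healthz endpoint (certificate verification is skipped only because this is a liveness probe against a self-signed cert):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 3 * time.Second,
			// The apiserver serves its self-signed cert here; skip verification
			// for this probe only, never for real traffic.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://localhost:8443/healthz")
		if err != nil {
			fmt.Println("apiserver unreachable:", err) // the refusal seen above
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Printf("healthz: %s (%d)\n", body, resp.StatusCode)
	}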
	
	
	==> dmesg <==
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.670491] CPU: 12 PID: 408572 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000052] RIP: 0033:0x7f6abf162b20
	[  +0.000009] Code: Unable to access opcode bytes at RIP 0x7f6abf162af6.
	[  +0.000001] RSP: 002b:00007ffd17a57c20 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000002] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +0.890723] CPU: 15 PID: 408828 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000005] RIP: 0033:0x7fa84658cb20
	[  +0.000009] Code: Unable to access opcode bytes at RIP 0x7fa84658caf6.
	[  +0.000001] RSP: 002b:00007ffd645ca590 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000004] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 01:54:11 up  2:13,  0 user,  load average: 5.30, 5.06, 4.18
	Linux kubernetes-upgrade-228200 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 01:54:07 kubernetes-upgrade-228200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:54:08 kubernetes-upgrade-228200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 335.
	Dec 17 01:54:08 kubernetes-upgrade-228200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:54:08 kubernetes-upgrade-228200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:54:08 kubernetes-upgrade-228200 kubelet[26089]: E1217 01:54:08.660525   26089 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:54:08 kubernetes-upgrade-228200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:54:08 kubernetes-upgrade-228200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:54:09 kubernetes-upgrade-228200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 336.
	Dec 17 01:54:09 kubernetes-upgrade-228200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:54:09 kubernetes-upgrade-228200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:54:09 kubernetes-upgrade-228200 kubelet[26102]: E1217 01:54:09.433805   26102 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:54:09 kubernetes-upgrade-228200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:54:09 kubernetes-upgrade-228200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:54:10 kubernetes-upgrade-228200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 337.
	Dec 17 01:54:10 kubernetes-upgrade-228200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:54:10 kubernetes-upgrade-228200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:54:10 kubernetes-upgrade-228200 kubelet[26130]: E1217 01:54:10.174432   26130 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:54:10 kubernetes-upgrade-228200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:54:10 kubernetes-upgrade-228200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 01:54:10 kubernetes-upgrade-228200 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 338.
	Dec 17 01:54:10 kubernetes-upgrade-228200 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:54:10 kubernetes-upgrade-228200 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 01:54:10 kubernetes-upgrade-228200 kubelet[26224]: E1217 01:54:10.919045   26224 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 01:54:10 kubernetes-upgrade-228200 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 01:54:10 kubernetes-upgrade-228200 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-228200 -n kubernetes-upgrade-228200
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-228200 -n kubernetes-upgrade-228200: exit status 2 (663.5598ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "kubernetes-upgrade-228200" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:176: Cleaning up "kubernetes-upgrade-228200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-228200
E1217 01:54:12.092725    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-228200: (3.4478811s)
--- FAIL: TestKubernetesUpgrade (876.66s)
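[editor's note] The kubelet journal above points at the root cause of this failure: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host, and the WSL2 kernel in this run (5.15.153.1-microsoft-standard-WSL2) is still on cgroup v1, so systemd restart-loops the unit (restart counter 335 through 338). A minimal sketch, not from the harness, of the usual way to tell the two hierarchies apart — the unified cgroup v2 hierarchy exposes /sys/fs/cgroup/cgroup.controllers, cgroup v1 does not:

    package main

    import (
        "fmt"
        "os"
    )

    func main() {
        // On a cgroup v2 (unified) host this file exists; on cgroup v1 it does not.
        if _, err := os.Stat("/sys/fs/cgroup/cgroup.controllers"); err == nil {
            fmt.Println("cgroup v2 (unified hierarchy)")
        } else {
            fmt.Println("cgroup v1 - kubelet v1.35+ fails validation here, as in the journal above")
        }
    }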

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (540.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-184000 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-184000 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m57.5987818s)

                                                
                                                
-- stdout --
	* [no-preload-184000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "no-preload-184000" primary control-plane node in "no-preload-184000" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 01:53:56.107435    7596 out.go:360] Setting OutFile to fd 1164 ...
	I1217 01:53:56.151438    7596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:53:56.151438    7596 out.go:374] Setting ErrFile to fd 1324...
	I1217 01:53:56.151438    7596 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:53:56.166456    7596 out.go:368] Setting JSON to false
	I1217 01:53:56.169445    7596 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8024,"bootTime":1765928411,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 01:53:56.169445    7596 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 01:53:56.182449    7596 out.go:179] * [no-preload-184000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 01:53:56.193446    7596 notify.go:221] Checking for updates...
	I1217 01:53:56.196476    7596 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 01:53:56.200440    7596 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:53:56.202445    7596 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 01:53:56.205440    7596 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:53:56.207442    7596 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:53:56.211438    7596 config.go:182] Loaded profile config "kubenet-891300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:53:56.211438    7596 config.go:182] Loaded profile config "kubernetes-upgrade-228200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 01:53:56.211438    7596 config.go:182] Loaded profile config "old-k8s-version-044000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.28.0
	I1217 01:53:56.211438    7596 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:53:56.333443    7596 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 01:53:56.337450    7596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:53:56.624814    7596 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:53:56.601685599 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:53:56.628822    7596 out.go:179] * Using the docker driver based on user configuration
	I1217 01:53:56.630821    7596 start.go:309] selected driver: docker
	I1217 01:53:56.630821    7596 start.go:927] validating driver "docker" against <nil>
	I1217 01:53:56.630821    7596 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:53:56.697176    7596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:53:56.962177    7596 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:94 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:53:56.940921788 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:53:56.963186    7596 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 01:53:56.964178    7596 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:53:56.967182    7596 out.go:179] * Using Docker Desktop driver with root privileges
	I1217 01:53:56.971175    7596 cni.go:84] Creating CNI manager for ""
	I1217 01:53:56.971175    7596 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 01:53:56.971175    7596 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 01:53:56.971175    7596 start.go:353] cluster config:
	{Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAut
hSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:53:56.975173    7596 out.go:179] * Starting "no-preload-184000" primary control-plane node in "no-preload-184000" cluster
	I1217 01:53:56.977175    7596 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 01:53:56.979182    7596 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:53:56.982182    7596 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:53:56.982182    7596 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:53:56.982182    7596 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\config.json ...
	I1217 01:53:56.982182    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1217 01:53:56.982182    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1217 01:53:56.982182    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1217 01:53:56.982182    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1217 01:53:56.983178    7596 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\config.json: {Name:mk142cf71314bd75adaac8add25d15852fa59f75 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:53:56.983178    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1217 01:53:56.983178    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1217 01:53:56.983178    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1217 01:53:56.983178    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1217 01:53:57.314244    7596 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:53:57.314244    7596 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:53:57.314244    7596 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:53:57.314244    7596 start.go:360] acquireMachinesLock for no-preload-184000: {Name:mk58fd592c3ebf84a2801325b861ffe90e12015f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:53:57.314244    7596 start.go:364] duration metric: took 0s to acquireMachinesLock for "no-preload-184000"
	I1217 01:53:57.314244    7596 start.go:93] Provisioning new machine with config: &{Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: AP
IServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:fals
e CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 01:53:57.314244    7596 start.go:125] createHost starting for "" (driver="docker")
	I1217 01:53:57.318247    7596 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 01:53:57.318247    7596 start.go:159] libmachine.API.Create for "no-preload-184000" (driver="docker")
	I1217 01:53:57.318247    7596 client.go:173] LocalClient.Create starting
	I1217 01:53:57.319247    7596 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1217 01:53:57.319247    7596 main.go:143] libmachine: Decoding PEM data...
	I1217 01:53:57.319247    7596 main.go:143] libmachine: Parsing certificate...
	I1217 01:53:57.319247    7596 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1217 01:53:57.319247    7596 main.go:143] libmachine: Decoding PEM data...
	I1217 01:53:57.320245    7596 main.go:143] libmachine: Parsing certificate...
	I1217 01:53:57.327242    7596 cli_runner.go:164] Run: docker network inspect no-preload-184000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 01:53:57.409235    7596 cli_runner.go:211] docker network inspect no-preload-184000 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 01:53:57.418254    7596 network_create.go:284] running [docker network inspect no-preload-184000] to gather additional debugging logs...
	I1217 01:53:57.418254    7596 cli_runner.go:164] Run: docker network inspect no-preload-184000
	W1217 01:53:57.537210    7596 cli_runner.go:211] docker network inspect no-preload-184000 returned with exit code 1
	I1217 01:53:57.537210    7596 network_create.go:287] error running [docker network inspect no-preload-184000]: docker network inspect no-preload-184000: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network no-preload-184000 not found
	I1217 01:53:57.537210    7596 network_create.go:289] output of [docker network inspect no-preload-184000]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network no-preload-184000 not found
	
	** /stderr **
	I1217 01:53:57.542195    7596 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:53:58.593244    7596 cli_runner.go:217] Completed: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}": (1.0510331s)
	I1217 01:53:58.634367    7596 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:53:58.700722    7596 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:53:59.010012    7596 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:53:59.041404    7596 network.go:209] skipping subnet 192.168.76.0/24 that is reserved: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:53:59.072671    7596 network.go:209] skipping subnet 192.168.85.0/24 that is reserved: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:53:59.095233    7596 network.go:206] using free private subnet 192.168.94.0/24: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0019a7350}
	I1217 01:53:59.095328    7596 network_create.go:124] attempt to create docker network no-preload-184000 192.168.94.0/24 with gateway 192.168.94.1 and MTU of 1500 ...
	I1217 01:53:59.100874    7596 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.94.0/24 --gateway=192.168.94.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=no-preload-184000 no-preload-184000
	I1217 01:53:59.353785    7596 network_create.go:108] docker network no-preload-184000 192.168.94.0/24 created
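[editor's note] For context on the subnet scan above: the candidates advance the third octet in steps of 9 (192.168.49.0/24, .58, .67, .76, .85) until one is not already reserved by an existing Docker network; here the first free subnet was 192.168.94.0/24. A sketch of that selection pattern as it appears in this log — hypothetical code, not minikube's actual implementation:

    package main

    import "fmt"

    func main() {
        // Subnets already taken by other profiles/networks in this run.
        reserved := map[string]bool{
            "192.168.49.0/24": true, "192.168.58.0/24": true,
            "192.168.67.0/24": true, "192.168.76.0/24": true,
            "192.168.85.0/24": true,
        }
        for octet := 49; octet <= 254; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if !reserved[subnet] {
                fmt.Println("using free private subnet", subnet) // -> 192.168.94.0/24
                break
            }
        }
    }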
	I1217 01:53:59.353785    7596 kic.go:121] calculated static IP "192.168.94.2" for the "no-preload-184000" container
	I1217 01:53:59.363785    7596 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 01:53:59.458651    7596 cli_runner.go:164] Run: docker volume create no-preload-184000 --label name.minikube.sigs.k8s.io=no-preload-184000 --label created_by.minikube.sigs.k8s.io=true
	I1217 01:53:59.550942    7596 oci.go:103] Successfully created a docker volume no-preload-184000
	I1217 01:53:59.557019    7596 cli_runner.go:164] Run: docker run --rm --name no-preload-184000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-184000 --entrypoint /usr/bin/test -v no-preload-184000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 01:54:00.152301    7596 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:54:00.152892    7596 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:54:00.155046    7596 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:54:00.155046    7596 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:54:00.165045    7596 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:54:00.166020    7596 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:54:00.186017    7596 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:54:00.187021    7596 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:54:00.188027    7596 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:54:00.188027    7596 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1217 01:54:00.188027    7596 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 3.2057987s
	I1217 01:54:00.188027    7596 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1217 01:54:00.198039    7596 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:54:00.202026    7596 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:54:00.203026    7596 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1217 01:54:00.203026    7596 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.2207971s
	I1217 01:54:00.203026    7596 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1217 01:54:00.242023    7596 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:54:00.242023    7596 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1217 01:54:00.243028    7596 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 3.2598025s
	I1217 01:54:00.243028    7596 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1217 01:54:00.245024    7596 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:54:00.245024    7596 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:54:00.256030    7596 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:54:00.259026    7596 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:54:00.259026    7596 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	W1217 01:54:00.263046    7596 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1217 01:54:00.268032    7596 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	W1217 01:54:00.315037    7596 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1217 01:54:00.374022    7596 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1217 01:54:00.442024    7596 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1217 01:54:00.510025    7596 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
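[editor's note] The repeated "authn lookup ... (trying anon)" warnings are the Windows Docker credential helper failing ("A specified logon session does not exist"); each pull then falls back to anonymous access, which succeeds for the public registry.k8s.io images (the cache lines below confirm every image was saved). A sketch of that fallback using go-containerregistry, which minikube's image.go appears to use for these lookups — simplified, with error handling and options trimmed:

    package main

    import (
        "fmt"

        "github.com/google/go-containerregistry/pkg/authn"
        "github.com/google/go-containerregistry/pkg/name"
        "github.com/google/go-containerregistry/pkg/v1/remote"
    )

    func main() {
        ref, err := name.ParseReference("registry.k8s.io/kube-proxy:v1.35.0-beta.0")
        if err != nil {
            panic(err)
        }
        // Anonymous credentials: enough for public images when the helper is broken.
        img, err := remote.Image(ref, remote.WithAuth(authn.Anonymous))
        if err != nil {
            panic(err)
        }
        digest, _ := img.Digest()
        fmt.Println("pulled manifest anonymously, digest:", digest)
    }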
	I1217 01:54:00.784655    7596 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1217 01:54:00.786656    7596 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1217 01:54:00.799658    7596 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1217 01:54:00.838120    7596 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1217 01:54:00.845104    7596 cache.go:162] opening:  \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1217 01:54:01.223985    7596 cli_runner.go:217] Completed: docker run --rm --name no-preload-184000-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-184000 --entrypoint /usr/bin/test -v no-preload-184000:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.6668975s)
	I1217 01:54:01.223985    7596 oci.go:107] Successfully prepared a docker volume no-preload-184000
	I1217 01:54:01.223985    7596 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:54:01.226983    7596 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:54:01.478990    7596 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:54:01.455886868 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:54:01.483991    7596 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 01:54:01.658880    7596 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1217 01:54:01.658880    7596 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 4.6756332s
	I1217 01:54:01.658880    7596 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1217 01:54:01.755214    7596 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname no-preload-184000 --name no-preload-184000 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=no-preload-184000 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=no-preload-184000 --network no-preload-184000 --ip 192.168.94.2 --volume no-preload-184000:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
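[editor's note] Note the --publish=127.0.0.1::8443 (and ::22, ::2376, ...) flags in the docker run above: the empty host-port field asks Docker for an ephemeral loopback port per exposed container port, so concurrent profiles can coexist. The harness later resolves the mapping with the docker container inspect template visible below (22/tcp comes back as 127.0.0.1:62904 in this run). A hypothetical helper doing the same lookup:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPort returns the ephemeral host port Docker bound for a container
    // port, e.g. "22/tcp" -> "62904" for no-preload-184000 in this run.
    func hostPort(container, port string) (string, error) {
        tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        p, err := hostPort("no-preload-184000", "22/tcp")
        if err != nil {
            fmt.Println("inspect failed:", err)
            return
        }
        fmt.Println("ssh endpoint: 127.0.0.1:" + p)
    }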
	I1217 01:54:02.468284    7596 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Running}}
	I1217 01:54:02.535939    7596 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 01:54:02.604940    7596 cli_runner.go:164] Run: docker exec no-preload-184000 stat /var/lib/dpkg/alternatives/iptables
	I1217 01:54:02.745944    7596 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1217 01:54:02.745944    7596 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 5.763678s
	I1217 01:54:02.745944    7596 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1217 01:54:02.747953    7596 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1217 01:54:02.747953    7596 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 5.7646901s
	I1217 01:54:02.747953    7596 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1217 01:54:02.750945    7596 oci.go:144] the created container "no-preload-184000" has a running status.
	I1217 01:54:02.750945    7596 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa...
	I1217 01:54:02.798670    7596 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 01:54:02.865326    7596 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1217 01:54:02.865326    7596 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 5.8820616s
	I1217 01:54:02.865326    7596 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1217 01:54:02.888255    7596 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 01:54:02.923275    7596 cache.go:157] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1217 01:54:02.923275    7596 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 5.9400098s
	I1217 01:54:02.923275    7596 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1217 01:54:02.923275    7596 cache.go:87] Successfully saved all images to host disk.
	I1217 01:54:02.951256    7596 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 01:54:02.951256    7596 kic_runner.go:114] Args: [docker exec --privileged no-preload-184000 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 01:54:03.086147    7596 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa...
	I1217 01:54:05.281518    7596 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 01:54:05.330994    7596 machine.go:94] provisionDockerMachine start ...
	I1217 01:54:05.334951    7596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 01:54:05.395073    7596 main.go:143] libmachine: Using SSH client type: native
	I1217 01:54:05.408226    7596 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 62904 <nil> <nil>}
	I1217 01:54:05.408226    7596 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:54:05.599714    7596 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-184000
	
	I1217 01:54:05.599714    7596 ubuntu.go:182] provisioning hostname "no-preload-184000"
	I1217 01:54:05.603713    7596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 01:54:05.655709    7596 main.go:143] libmachine: Using SSH client type: native
	I1217 01:54:05.655709    7596 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 62904 <nil> <nil>}
	I1217 01:54:05.655709    7596 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-184000 && echo "no-preload-184000" | sudo tee /etc/hostname
	I1217 01:54:05.836444    7596 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-184000
	
	I1217 01:54:05.839784    7596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 01:54:05.895965    7596 main.go:143] libmachine: Using SSH client type: native
	I1217 01:54:05.897008    7596 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 62904 <nil> <nil>}
	I1217 01:54:05.897008    7596 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-184000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-184000/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-184000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:54:06.072684    7596 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:54:06.072684    7596 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 01:54:06.072684    7596 ubuntu.go:190] setting up certificates
	I1217 01:54:06.072684    7596 provision.go:84] configureAuth start
	I1217 01:54:06.077293    7596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-184000
	I1217 01:54:06.133029    7596 provision.go:143] copyHostCerts
	I1217 01:54:06.133029    7596 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 01:54:06.133029    7596 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 01:54:06.133708    7596 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 01:54:06.134317    7596 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 01:54:06.134317    7596 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 01:54:06.134317    7596 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 01:54:06.135112    7596 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 01:54:06.135112    7596 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 01:54:06.135784    7596 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 01:54:06.136394    7596 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-184000 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-184000]
	I1217 01:54:06.167245    7596 provision.go:177] copyRemoteCerts
	I1217 01:54:06.171434    7596 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:54:06.175590    7596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 01:54:06.228039    7596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62904 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 01:54:06.347976    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 01:54:06.379292    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 01:54:06.408590    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 01:54:06.439807    7596 provision.go:87] duration metric: took 367.1186ms to configureAuth
	I1217 01:54:06.439906    7596 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:54:06.440080    7596 config.go:182] Loaded profile config "no-preload-184000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 01:54:06.444118    7596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 01:54:06.505052    7596 main.go:143] libmachine: Using SSH client type: native
	I1217 01:54:06.505052    7596 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 62904 <nil> <nil>}
	I1217 01:54:06.505052    7596 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 01:54:06.673965    7596 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 01:54:06.673965    7596 ubuntu.go:71] root file system type: overlay
	I1217 01:54:06.674505    7596 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 01:54:06.678339    7596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 01:54:06.736413    7596 main.go:143] libmachine: Using SSH client type: native
	I1217 01:54:06.737026    7596 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 62904 <nil> <nil>}
	I1217 01:54:06.737026    7596 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 01:54:06.924620    7596 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 01:54:06.927625    7596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 01:54:06.986141    7596 main.go:143] libmachine: Using SSH client type: native
	I1217 01:54:06.987622    7596 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 62904 <nil> <nil>}
	I1217 01:54:06.987622    7596 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 01:54:08.492536    7596 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-17 01:54:06.920075259 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1217 01:54:08.492536    7596 machine.go:97] duration metric: took 3.1614961s to provisionDockerMachine
	I1217 01:54:08.492536    7596 client.go:176] duration metric: took 11.1741253s to LocalClient.Create
	I1217 01:54:08.492536    7596 start.go:167] duration metric: took 11.1741253s to libmachine.API.Create "no-preload-184000"
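The docker.service update above uses a compare-then-install idiom: the candidate unit is written to docker.service.new, diffed against the live unit, and only when diff exits non-zero is it moved into place, followed by daemon-reload, enable, and restart. A minimal sketch of that idiom follows; the function name and arguments are illustrative, not minikube's API:

    # Install a new unit file only when it differs from the live one, then
    # reload systemd and restart the service (sketch of the idiom in the log).
    update_unit() {
      local new=$1 live=$2 svc=$3
      if ! sudo diff -u "$live" "$new"; then   # non-zero exit: files differ (or live unit missing)
        sudo mv "$new" "$live"
        sudo systemctl daemon-reload
        sudo systemctl -f enable "$svc"
        sudo systemctl -f restart "$svc"
      fi
    }
    update_unit /lib/systemd/system/docker.service.new \
                /lib/systemd/system/docker.service docker

Note that the || form in the log also fires when the live unit does not exist yet (diff exits 2), which is exactly what a first-time provision wants.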
	I1217 01:54:08.492536    7596 start.go:293] postStartSetup for "no-preload-184000" (driver="docker")
	I1217 01:54:08.492536    7596 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:54:08.498538    7596 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:54:08.503540    7596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 01:54:08.556548    7596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62904 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 01:54:08.696406    7596 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:54:08.704365    7596 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:54:08.704365    7596 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:54:08.704365    7596 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 01:54:08.704365    7596 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 01:54:08.704365    7596 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 01:54:08.709369    7596 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:54:08.722374    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 01:54:08.750371    7596 start.go:296] duration metric: took 257.8315ms for postStartSetup
	I1217 01:54:08.755376    7596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-184000
	I1217 01:54:08.814373    7596 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\config.json ...
	I1217 01:54:08.820369    7596 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:54:08.824376    7596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 01:54:08.881320    7596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62904 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 01:54:09.014069    7596 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:54:09.023079    7596 start.go:128] duration metric: took 11.7086635s to createHost
	I1217 01:54:09.023079    7596 start.go:83] releasing machines lock for "no-preload-184000", held for 11.7086635s
	I1217 01:54:09.027070    7596 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-184000
	I1217 01:54:09.080057    7596 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 01:54:09.084053    7596 ssh_runner.go:195] Run: cat /version.json
	I1217 01:54:09.084053    7596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 01:54:09.087065    7596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 01:54:09.140051    7596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62904 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 01:54:09.141055    7596 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:62904 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	W1217 01:54:09.251845    7596 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 01:54:09.262915    7596 ssh_runner.go:195] Run: systemctl --version
	I1217 01:54:09.284177    7596 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:54:09.300498    7596 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:54:09.305577    7596 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:54:09.355471    7596 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
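The find one-liner above renames any bridge/podman CNI configs to *.mk_disabled so they cannot shadow the CNI that minikube selects later. A read-only sketch to preview which files such a rename would touch (same match expression, -print in place of the mv):

    # List bridge/podman CNI configs that the rename above would disable.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( -name '*bridge*' -o -name '*podman*' \) -not -name '*.mk_disabled' -print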
	I1217 01:54:09.356463    7596 start.go:496] detecting cgroup driver to use...
	I1217 01:54:09.356463    7596 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:54:09.356463    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:54:09.389475    7596 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 01:54:09.414466    7596 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 01:54:09.432477    7596 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 01:54:09.436468    7596 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	W1217 01:54:09.448475    7596 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 01:54:09.448475    7596 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 01:54:09.456474    7596 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:54:09.483465    7596 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 01:54:09.510478    7596 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:54:09.538480    7596 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:54:09.562469    7596 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 01:54:09.586477    7596 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 01:54:09.609469    7596 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 01:54:09.629466    7596 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:54:09.646471    7596 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:54:09.664468    7596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:54:09.836302    7596 ssh_runner.go:195] Run: sudo systemctl restart containerd
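The sed edits above force SystemdCgroup = false so containerd uses the cgroupfs driver detected on the host; the KubeletConfiguration generated later pins cgroupDriver: cgroupfs to match. A quick post-restart check that the runtimes agree, reusing commands that appear elsewhere in this log:

    # Both should report the cgroupfs driver once the configs above are applied.
    docker info --format '{{.CgroupDriver}}'
    grep 'SystemdCgroup' /etc/containerd/config.toml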
	I1217 01:54:10.008703    7596 start.go:496] detecting cgroup driver to use...
	I1217 01:54:10.008703    7596 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:54:10.013694    7596 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 01:54:10.039689    7596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:54:10.062692    7596 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:54:10.113682    7596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:54:10.139687    7596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:54:10.159681    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:54:10.189695    7596 ssh_runner.go:195] Run: which cri-dockerd
	I1217 01:54:10.201695    7596 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 01:54:10.216689    7596 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 01:54:10.242682    7596 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 01:54:10.391688    7596 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 01:54:10.547685    7596 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 01:54:10.547685    7596 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 01:54:10.573692    7596 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 01:54:10.600201    7596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:54:10.762160    7596 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 01:54:11.837161    7596 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.074985s)
	I1217 01:54:11.842157    7596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:54:11.867190    7596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 01:54:11.899433    7596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:54:11.927472    7596 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 01:54:12.098729    7596 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 01:54:12.272755    7596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:54:12.445743    7596 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 01:54:12.475736    7596 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 01:54:12.506736    7596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:54:12.662737    7596 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 01:54:12.786247    7596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:54:12.804156    7596 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 01:54:12.808161    7596 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 01:54:12.816316    7596 start.go:564] Will wait 60s for crictl version
	I1217 01:54:12.822532    7596 ssh_runner.go:195] Run: which crictl
	I1217 01:54:12.834531    7596 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:54:12.884329    7596 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 01:54:12.889333    7596 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:54:12.940322    7596 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:54:12.985701    7596 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 01:54:12.988688    7596 cli_runner.go:164] Run: docker exec -t no-preload-184000 dig +short host.docker.internal
	I1217 01:54:13.134697    7596 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 01:54:13.139711    7596 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 01:54:13.149699    7596 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
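The hosts update above is the usual sudo-safe rewrite: filter out any stale host.minikube.internal line, append the fresh mapping, write the result to a temp file, and sudo cp it back. A plain > redirect would not work, because the redirection is performed by the unprivileged shell before sudo elevates. The same idiom as a standalone sketch, with the name and IP taken from the log:

    # Replace-or-append one /etc/hosts entry without editing the file in place.
    name=host.minikube.internal; ip=192.168.65.254
    { grep -v "$name" /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/h.$$"
    sudo cp "/tmp/h.$$" /etc/hosts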
	I1217 01:54:13.170698    7596 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-184000
	I1217 01:54:13.226694    7596 kubeadm.go:884] updating cluster {Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...

	I1217 01:54:13.226694    7596 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:54:13.230702    7596 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 01:54:13.263705    7596 docker.go:691] Got preloaded images: 
	I1217 01:54:13.263705    7596 docker.go:697] registry.k8s.io/kube-apiserver:v1.35.0-beta.0 wasn't preloaded
	I1217 01:54:13.263705    7596 cache_images.go:90] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.35.0-beta.0 registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 registry.k8s.io/kube-scheduler:v1.35.0-beta.0 registry.k8s.io/kube-proxy:v1.35.0-beta.0 registry.k8s.io/pause:3.10.1 registry.k8s.io/etcd:3.6.5-0 registry.k8s.io/coredns/coredns:v1.13.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I1217 01:54:13.272697    7596 image.go:138] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:54:13.278698    7596 image.go:138] retrieving image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:54:13.281707    7596 image.go:181] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:54:13.282718    7596 image.go:138] retrieving image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:54:13.287724    7596 image.go:138] retrieving image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:54:13.288707    7596 image.go:181] daemon lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:54:13.292708    7596 image.go:181] daemon lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:54:13.294701    7596 image.go:138] retrieving image: registry.k8s.io/etcd:3.6.5-0
	I1217 01:54:13.298706    7596 image.go:181] daemon lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:54:13.299711    7596 image.go:138] retrieving image: registry.k8s.io/pause:3.10.1
	I1217 01:54:13.302700    7596 image.go:181] daemon lookup for registry.k8s.io/etcd:3.6.5-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.6.5-0
	I1217 01:54:13.304713    7596 image.go:138] retrieving image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:54:13.307700    7596 image.go:138] retrieving image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:54:13.308728    7596 image.go:181] daemon lookup for registry.k8s.io/pause:3.10.1: Error response from daemon: No such image: registry.k8s.io/pause:3.10.1
	I1217 01:54:13.314706    7596 image.go:181] daemon lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:54:13.316723    7596 image.go:181] daemon lookup for registry.k8s.io/coredns/coredns:v1.13.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.13.1
	W1217 01:54:13.341695    7596 image.go:191] authn lookup for gcr.io/k8s-minikube/storage-provisioner:v5 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1217 01:54:13.395694    7596 image.go:191] authn lookup for registry.k8s.io/kube-proxy:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1217 01:54:13.445702    7596 image.go:191] authn lookup for registry.k8s.io/kube-apiserver:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1217 01:54:13.502701    7596 image.go:191] authn lookup for registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1217 01:54:13.566707    7596 image.go:191] authn lookup for registry.k8s.io/etcd:3.6.5-0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1217 01:54:13.621714    7596 image.go:191] authn lookup for registry.k8s.io/pause:3.10.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	W1217 01:54:13.674700    7596 image.go:191] authn lookup for registry.k8s.io/kube-scheduler:v1.35.0-beta.0 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1217 01:54:13.697703    7596 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.35.0-beta.0
	W1217 01:54:13.731715    7596 image.go:191] authn lookup for registry.k8s.io/coredns/coredns:v1.13.1 (trying anon): error getting credentials - err: exit status 1, out: `error getting credentials - err: exit status 1, out: `A specified logon session does not exist. It may already have been terminated.``
	I1217 01:54:13.736716    7596 cache_images.go:118] "registry.k8s.io/kube-proxy:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-proxy:v1.35.0-beta.0" does not exist at hash "8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810" in container runtime
	I1217 01:54:13.736716    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1217 01:54:13.736716    7596 docker.go:338] Removing image: registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:54:13.742708    7596 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-proxy:v1.35.0-beta.0
	I1217 01:54:13.750699    7596 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:54:13.782719    7596 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1217 01:54:13.787705    7596 cache_images.go:118] "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" does not exist at hash "aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b" in container runtime
	I1217 01:54:13.787705    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1217 01:54:13.787705    7596 docker.go:338] Removing image: registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:54:13.788709    7596 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1217 01:54:13.791716    7596 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	I1217 01:54:13.795701    7596 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-proxy_v1.35.0-beta.0': No such file or directory
	I1217 01:54:13.795701    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 (25788928 bytes)
	I1217 01:54:13.809728    7596 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:54:13.836716    7596 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1217 01:54:13.843713    7596 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1217 01:54:13.879721    7596 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/etcd:3.6.5-0
	I1217 01:54:13.922728    7596 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/pause:3.10.1
	I1217 01:54:13.964716    7596 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0': No such file or directory
	I1217 01:54:13.964716    7596 cache_images.go:118] "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" does not exist at hash "45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc" in container runtime
	I1217 01:54:13.964716    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1217 01:54:13.964716    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 (27682304 bytes)
	I1217 01:54:13.964716    7596 docker.go:338] Removing image: registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:54:13.969714    7596 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	I1217 01:54:13.975721    7596 cache_images.go:118] "registry.k8s.io/etcd:3.6.5-0" needs transfer: "registry.k8s.io/etcd:3.6.5-0" does not exist at hash "a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1" in container runtime
	I1217 01:54:13.975721    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1217 01:54:13.975721    7596 docker.go:338] Removing image: registry.k8s.io/etcd:3.6.5-0
	I1217 01:54:13.981708    7596 ssh_runner.go:195] Run: docker rmi registry.k8s.io/etcd:3.6.5-0
	I1217 01:54:13.982758    7596 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:54:13.988708    7596 cache_images.go:118] "registry.k8s.io/pause:3.10.1" needs transfer: "registry.k8s.io/pause:3.10.1" does not exist at hash "cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f" in container runtime
	I1217 01:54:13.988708    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1217 01:54:13.988708    7596 docker.go:338] Removing image: registry.k8s.io/pause:3.10.1
	I1217 01:54:13.993727    7596 ssh_runner.go:195] Run: docker rmi registry.k8s.io/pause:3.10.1
	I1217 01:54:14.029725    7596 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:54:14.039713    7596 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1217 01:54:14.044715    7596 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1217 01:54:14.094453    7596 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1217 01:54:14.100436    7596 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0
	I1217 01:54:14.100436    7596 cache_images.go:118] "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" does not exist at hash "7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46" in container runtime
	I1217 01:54:14.100436    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1217 01:54:14.100436    7596 docker.go:338] Removing image: registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:54:14.106443    7596 ssh_runner.go:195] Run: docker rmi registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	I1217 01:54:14.140438    7596 ssh_runner.go:195] Run: docker image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:54:14.165435    7596 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1217 01:54:14.175453    7596 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1
	I1217 01:54:14.203448    7596 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0': No such file or directory
	I1217 01:54:14.203448    7596 cache_images.go:118] "registry.k8s.io/coredns/coredns:v1.13.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.13.1" does not exist at hash "aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139" in container runtime
	I1217 01:54:14.203448    7596 ssh_runner.go:352] existence check for /var/lib/minikube/images/etcd_3.6.5-0: stat -c "%s %y" /var/lib/minikube/images/etcd_3.6.5-0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/etcd_3.6.5-0': No such file or directory
	I1217 01:54:14.203448    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 (23131648 bytes)
	I1217 01:54:14.203448    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1217 01:54:14.203448    7596 docker.go:338] Removing image: registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:54:14.203448    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 --> /var/lib/minikube/images/etcd_3.6.5-0 (22883840 bytes)
	I1217 01:54:14.207438    7596 ssh_runner.go:195] Run: docker rmi registry.k8s.io/coredns/coredns:v1.13.1
	I1217 01:54:14.281037    7596 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1217 01:54:14.286030    7596 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1217 01:54:14.298043    7596 ssh_runner.go:352] existence check for /var/lib/minikube/images/pause_3.10.1: stat -c "%s %y" /var/lib/minikube/images/pause_3.10.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/pause_3.10.1': No such file or directory
	I1217 01:54:14.298043    7596 cache_images.go:118] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I1217 01:54:14.298043    7596 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1217 01:54:14.298043    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 --> /var/lib/minikube/images/pause_3.10.1 (321024 bytes)
	I1217 01:54:14.298043    7596 docker.go:338] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:54:14.304050    7596 ssh_runner.go:195] Run: docker rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 01:54:14.396050    7596 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1217 01:54:14.396050    7596 ssh_runner.go:352] existence check for /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0': No such file or directory
	I1217 01:54:14.396050    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 --> /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 (17239040 bytes)
	I1217 01:54:14.402046    7596 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1
	I1217 01:54:14.473051    7596 cache_images.go:291] Loading image from: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1217 01:54:14.479061    7596 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I1217 01:54:14.490037    7596 docker.go:305] Loading image: /var/lib/minikube/images/pause_3.10.1
	I1217 01:54:14.490037    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/pause_3.10.1 | docker load"
	I1217 01:54:14.597063    7596 ssh_runner.go:352] existence check for /var/lib/minikube/images/coredns_v1.13.1: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.13.1: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/coredns_v1.13.1': No such file or directory
	I1217 01:54:14.597063    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 --> /var/lib/minikube/images/coredns_v1.13.1 (23562752 bytes)
	I1217 01:54:14.690055    7596 ssh_runner.go:352] existence check for /var/lib/minikube/images/storage-provisioner_v5: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/images/storage-provisioner_v5': No such file or directory
	I1217 01:54:14.690055    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 --> /var/lib/minikube/images/storage-provisioner_v5 (9060352 bytes)
	I1217 01:54:14.863061    7596 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 from cache
	I1217 01:54:15.780835    7596 docker.go:305] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I1217 01:54:15.780835    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/storage-provisioner_v5 | docker load"
	I1217 01:54:16.767171    7596 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 from cache
	I1217 01:54:16.767171    7596 docker.go:305] Loading image: /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0
	I1217 01:54:16.767171    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load"
	I1217 01:54:20.806083    7596 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load": (4.0388525s)
	I1217 01:54:20.806083    7596 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 from cache
	I1217 01:54:20.806083    7596 docker.go:305] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0
	I1217 01:54:20.806083    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load"
	I1217 01:54:25.988797    7596 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-controller-manager_v1.35.0-beta.0 | docker load": (5.1826381s)
	I1217 01:54:25.988797    7596 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 from cache
	I1217 01:54:25.988797    7596 docker.go:305] Loading image: /var/lib/minikube/images/etcd_3.6.5-0
	I1217 01:54:25.988797    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load"
	I1217 01:54:38.416317    7596 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/etcd_3.6.5-0 | docker load": (12.4273369s)
	I1217 01:54:38.416317    7596 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 from cache
	I1217 01:54:38.416317    7596 docker.go:305] Loading image: /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0
	I1217 01:54:38.416317    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load"
	I1217 01:54:40.575419    7596 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-apiserver_v1.35.0-beta.0 | docker load": (2.1590709s)
	I1217 01:54:40.575419    7596 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 from cache
	I1217 01:54:40.575962    7596 docker.go:305] Loading image: /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0
	I1217 01:54:40.576103    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load"
	I1217 01:54:41.890295    7596 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/kube-scheduler_v1.35.0-beta.0 | docker load": (1.3141722s)
	I1217 01:54:41.890295    7596 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 from cache
	I1217 01:54:41.890829    7596 docker.go:305] Loading image: /var/lib/minikube/images/coredns_v1.13.1
	I1217 01:54:41.890897    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load"
	I1217 01:54:43.838783    7596 ssh_runner.go:235] Completed: /bin/bash -c "sudo cat /var/lib/minikube/images/coredns_v1.13.1 | docker load": (1.9478574s)
	I1217 01:54:43.838783    7596 cache_images.go:323] Transferred and loaded C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 from cache
	I1217 01:54:43.838783    7596 cache_images.go:125] Successfully loaded all cached images
	I1217 01:54:43.838783    7596 cache_images.go:94] duration metric: took 30.5746282s to LoadCachedImages
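Because this is the no-preload profile and no preload tarball exists for v1.35.0-beta.0, each image takes the slow path visible above: stat to see whether the archive is already on the node, scp it from the host cache if not, then stream it into the runtime. The load step alone, as a sketch (archive path copied from the log; sudo sits on cat because the archive is root-owned, while docker load just reads stdin):

    # Stream a cached image archive into the Docker daemon on the node.
    sudo cat /var/lib/minikube/images/kube-proxy_v1.35.0-beta.0 | docker load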
	I1217 01:54:43.838783    7596 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 docker true true} ...
	I1217 01:54:43.839780    7596 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-184000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:54:43.842775    7596 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 01:54:43.927176    7596 cni.go:84] Creating CNI manager for ""
	I1217 01:54:43.927176    7596 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 01:54:43.927176    7596 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 01:54:43.928175    7596 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-184000 NodeName:no-preload-184000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:54:43.928175    7596 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-184000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
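The blob above is one multi-document file combining InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration; minikube ships it to /var/tmp/minikube/kubeadm.yaml further down. As a sketch, recent kubeadm releases can sanity-check such a file before any init attempt (same binary path this run uses):

    # validate the multi-document kubeadm config without touching cluster state
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml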
	I1217 01:54:43.934165    7596 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:54:43.955284    7596 binaries.go:54] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.35.0-beta.0': No such file or directory
	
	Initiating transfer...
	I1217 01:54:43.959818    7596 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:54:43.977247    7596 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubeadm
	I1217 01:54:43.977247    7596 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubelet
	I1217 01:54:43.977247    7596 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubectl
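The checksum=file: suffix on these URLs makes the downloader fetch the published .sha256 alongside each binary and verify it before use. The equivalent manual download, as a sketch for kubectl:

    curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl"
    curl -LO "https://dl.k8s.io/release/v1.35.0-beta.0/bin/linux/amd64/kubectl.sha256"
    # sha256sum wants "<hash>  <filename>"; the published file holds only the hash
    echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check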
	I1217 01:54:45.303010    7596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:54:45.326020    7596 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet
	I1217 01:54:45.333013    7596 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl
	I1217 01:54:45.334016    7596 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet': No such file or directory
	I1217 01:54:45.334016    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubelet --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubelet (58106148 bytes)
	I1217 01:54:45.339017    7596 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl': No such file or directory
	I1217 01:54:45.340045    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubectl --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl (58589368 bytes)
	I1217 01:54:45.534033    7596 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm
	I1217 01:54:45.633764    7596 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm': No such file or directory
	I1217 01:54:45.634751    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\linux\amd64\v1.35.0-beta.0/kubeadm --> /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm (72364216 bytes)
	I1217 01:54:47.365678    7596 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:54:47.379688    7596 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 01:54:47.402381    7596 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 01:54:47.426226    7596 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1217 01:54:47.459464    7596 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 01:54:47.472038    7596 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
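That one-liner is the usual idiom for an idempotent /etc/hosts update: filter out any stale entry, append the fresh mapping, and copy the result back in a single step. Spelled out:

    # drop any existing control-plane.minikube.internal line, add the new mapping,
    # then replace /etc/hosts from the temp file in one cp
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo $'192.168.94.2\tcontrol-plane.minikube.internal'
    } > /tmp/hosts.new
    sudo cp /tmp/hosts.new /etc/hosts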
	I1217 01:54:47.499556    7596 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:54:47.665284    7596 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:54:47.695948    7596 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000 for IP: 192.168.94.2
	I1217 01:54:47.695948    7596 certs.go:195] generating shared ca certs ...
	I1217 01:54:47.695948    7596 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:54:47.696747    7596 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 01:54:47.696786    7596 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 01:54:47.696786    7596 certs.go:257] generating profile certs ...
	I1217 01:54:47.697429    7596 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\client.key
	I1217 01:54:47.697568    7596 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\client.crt with IP's: []
	I1217 01:54:47.762241    7596 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\client.crt ...
	I1217 01:54:47.762241    7596 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\client.crt: {Name:mk75fda15697199653615a2f0a82aaea0b3c44c2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:54:47.762475    7596 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\client.key ...
	I1217 01:54:47.762475    7596 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\client.key: {Name:mk381f0a59eac36c4359178da45f877a3d0cddfe Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:54:47.763573    7596 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.key.d162c569
	I1217 01:54:47.764303    7596 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.crt.d162c569 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.94.2]
	I1217 01:54:47.792691    7596 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.crt.d162c569 ...
	I1217 01:54:47.792691    7596 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.crt.d162c569: {Name:mkdecbe15b5453e5bf64ba51286d36ac64a58c1b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:54:47.793695    7596 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.key.d162c569 ...
	I1217 01:54:47.794692    7596 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.key.d162c569: {Name:mk0dd21e160be4383f6c1ca61df16e8381d03014 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:54:47.794958    7596 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.crt.d162c569 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.crt
	I1217 01:54:47.809948    7596 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.key.d162c569 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.key
	I1217 01:54:47.810566    7596 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\proxy-client.key
	I1217 01:54:47.810566    7596 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\proxy-client.crt with IP's: []
	I1217 01:54:47.929620    7596 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\proxy-client.crt ...
	I1217 01:54:47.929620    7596 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\proxy-client.crt: {Name:mk10d3e59996baeac7a6572df23e14355d4e2bce Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:54:47.930615    7596 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\proxy-client.key ...
	I1217 01:54:47.930615    7596 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\proxy-client.key: {Name:mkb734eefe35eddff0743180f86b10202192fc77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:54:47.943611    7596 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 01:54:47.944345    7596 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 01:54:47.944345    7596 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 01:54:47.944345    7596 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 01:54:47.944345    7596 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 01:54:47.944963    7596 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 01:54:47.945102    7596 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 01:54:47.945707    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:54:47.977308    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:54:48.008715    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:54:48.039126    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:54:48.077398    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 01:54:48.109587    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:54:48.138392    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:54:48.169855    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:54:48.198673    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:54:48.229745    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 01:54:48.264900    7596 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 01:54:48.292534    7596 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:54:48.320346    7596 ssh_runner.go:195] Run: openssl version
	I1217 01:54:48.334613    7596 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:54:48.353324    7596 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:54:48.374228    7596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:54:48.383350    7596 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:54:48.387229    7596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:54:48.434034    7596 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:54:48.453459    7596 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 01:54:48.474667    7596 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 01:54:48.493490    7596 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 01:54:48.510510    7596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 01:54:48.517490    7596 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 01:54:48.520489    7596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 01:54:48.570814    7596 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:54:48.586828    7596 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
	I1217 01:54:48.606900    7596 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 01:54:48.624103    7596 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 01:54:48.642799    7596 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 01:54:48.651681    7596 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 01:54:48.656109    7596 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 01:54:48.708141    7596 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:54:48.726194    7596 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
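The link names here (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash names, which is how anything reading /etc/ssl/certs looks a CA up. The same linking done by hand for one certificate, as a sketch:

    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")   # prints e.g. b5213941
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"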
	I1217 01:54:48.746785    7596 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:54:48.753951    7596 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:54:48.754947    7596 kubeadm.go:401] StartCluster: {Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:54:48.757949    7596 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 01:54:48.790670    7596 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:54:48.814414    7596 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:54:48.828157    7596 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:54:48.832704    7596 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:54:48.849891    7596 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:54:48.849933    7596 kubeadm.go:158] found existing configuration files:
	
	I1217 01:54:48.854196    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:54:48.872607    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:54:48.877497    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:54:48.895607    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:54:48.914314    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:54:48.919009    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:54:48.936611    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:54:48.952634    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:54:48.958909    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:54:48.977922    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:54:48.992700    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:54:48.999203    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:54:49.021295    7596 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:54:49.140062    7596 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 01:54:49.227511    7596 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:54:49.348113    7596 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:58:51.115615    7596 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:58:51.115718    7596 kubeadm.go:319] 
	I1217 01:58:51.115916    7596 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:58:51.121578    7596 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:58:51.121578    7596 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:58:51.121578    7596 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:58:51.122136    7596 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 01:58:51.122857    7596 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_INET: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 01:58:51.123993    7596 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 01:58:51.124691    7596 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] OS: Linux
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:58:51.125946    7596 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:58:51.126099    7596 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:58:51.126099    7596 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:58:51.128573    7596 out.go:252]   - Generating certificates and keys ...
	I1217 01:58:51.128573    7596 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:58:51.128573    7596 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:58:51.129197    7596 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 01:58:51.129388    7596 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 01:58:51.129558    7596 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 01:58:51.129682    7596 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:58:51.130781    7596 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:58:51.130943    7596 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:58:51.131040    7596 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:58:51.131231    7596 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:58:51.131356    7596 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:58:51.131482    7596 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:58:51.131482    7596 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:58:51.133818    7596 out.go:252]   - Booting up control plane ...
	I1217 01:58:51.133818    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:58:51.133818    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:58:51.135780    7596 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:58:51.135780    7596 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:58:51.135780    7596 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.002324195s
	I1217 01:58:51.135780    7596 kubeadm.go:319] 
	I1217 01:58:51.135780    7596 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:58:51.135780    7596 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:58:51.135780    7596 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:58:51.135780    7596 kubeadm.go:319] 
	I1217 01:58:51.135780    7596 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:58:51.135780    7596 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:58:51.136777    7596 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:58:51.136777    7596 kubeadm.go:319] 
	W1217 01:58:51.136777    7596 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.002324195s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
	
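Given the SystemVerification warning above (this WSL2 node is on cgroups v1, which kubelet v1.35 treats as opt-in), one plausible manual workaround before a retry is to set the option the warning names; treat the field placement as an assumption, since minikube normally owns this file:

    # assumption: failCgroupV1 is accepted at the top level of the
    # KubeletConfiguration in /var/lib/kubelet/config.yaml on this node
    echo 'failCgroupV1: false' | sudo tee -a /var/lib/kubelet/config.yaml
    sudo systemctl restart kubelet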
	I1217 01:58:51.139887    7596 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 01:58:51.605403    7596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:58:51.627327    7596 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:58:51.634266    7596 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:58:51.651778    7596 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:58:51.651778    7596 kubeadm.go:158] found existing configuration files:
	
	I1217 01:58:51.657261    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:58:51.670434    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:58:51.674365    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:58:51.692907    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:58:51.707259    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:58:51.711851    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:58:51.731617    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:58:51.746650    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:58:51.750583    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:58:51.769267    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:58:51.784345    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:58:51.789034    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:58:51.805733    7596 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:58:51.926943    7596 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 01:58:52.006918    7596 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:58:52.107226    7596 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:02:52.901103    7596 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:02:52.901187    7596 kubeadm.go:319] 
	I1217 02:02:52.901405    7596 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:02:52.906962    7596 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 02:02:52.907051    7596 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:02:52.907051    7596 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:02:52.907051    7596 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 02:02:52.907051    7596 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 02:02:52.907664    7596 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_INET: enabled
	I1217 02:02:52.908322    7596 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 02:02:52.908447    7596 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 02:02:52.908571    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 02:02:52.908730    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 02:02:52.908849    7596 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 02:02:52.909000    7596 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] OS: Linux
	I1217 02:02:52.909731    7596 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:02:52.910342    7596 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:02:52.911109    7596 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:02:52.911252    7596 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:02:52.911252    7596 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:02:52.911252    7596 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:02:52.914099    7596 out.go:252]   - Generating certificates and keys ...
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 02:02:52.915391    7596 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 02:02:52.915391    7596 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 02:02:52.915391    7596 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 02:02:52.915391    7596 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 02:02:52.915391    7596 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 02:02:52.915926    7596 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 02:02:52.916016    7596 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 02:02:52.916016    7596 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 02:02:52.916016    7596 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 02:02:52.916016    7596 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 02:02:52.918827    7596 out.go:252]   - Booting up control plane ...
	I1217 02:02:52.918827    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 02:02:52.920875    7596 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 02:02:52.920875    7596 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:02:52.920875    7596 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000516808s
	I1217 02:02:52.920875    7596 kubeadm.go:319] 
	I1217 02:02:52.920875    7596 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:02:52.920875    7596 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:02:52.920875    7596 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:02:52.920875    7596 kubeadm.go:319] 
	I1217 02:02:52.920875    7596 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:02:52.920875    7596 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:02:52.921883    7596 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:02:52.921883    7596 kubeadm.go:319] 
	I1217 02:02:52.921883    7596 kubeadm.go:403] duration metric: took 8m4.1597601s to StartCluster
	I1217 02:02:52.921883    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:52.925883    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:52.985042    7596 cri.go:89] found id: ""
	I1217 02:02:52.985042    7596 logs.go:282] 0 containers: []
	W1217 02:02:52.985042    7596 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:52.985042    7596 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:52.989497    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:53.035444    7596 cri.go:89] found id: ""
	I1217 02:02:53.035444    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.035444    7596 logs.go:284] No container was found matching "etcd"
	I1217 02:02:53.035444    7596 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:53.040633    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:53.090166    7596 cri.go:89] found id: ""
	I1217 02:02:53.090166    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.090166    7596 logs.go:284] No container was found matching "coredns"
	I1217 02:02:53.090166    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:53.095276    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:53.155229    7596 cri.go:89] found id: ""
	I1217 02:02:53.155292    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.155292    7596 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:53.155292    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:53.159579    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:53.201389    7596 cri.go:89] found id: ""
	I1217 02:02:53.201389    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.201389    7596 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:53.201389    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:53.206627    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:53.251727    7596 cri.go:89] found id: ""
	I1217 02:02:53.251807    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.251807    7596 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:53.251807    7596 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:53.255868    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:53.296927    7596 cri.go:89] found id: ""
	I1217 02:02:53.297002    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.297002    7596 logs.go:284] No container was found matching "kindnet"
	I1217 02:02:53.297002    7596 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:53.297002    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:53.362489    7596 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:53.362489    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:53.402379    7596 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:53.402379    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:53.486459    7596 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:02:53.475461   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.476269   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.480737   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.482819   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.484040   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:02:53.475461   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.476269   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.480737   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.482819   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.484040   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:53.486459    7596 logs.go:123] Gathering logs for Docker ...
	I1217 02:02:53.486459    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:02:53.519898    7596 logs.go:123] Gathering logs for container status ...
	I1217 02:02:53.519898    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:02:53.571631    7596 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000516808s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 02:02:53.571705    7596 out.go:285] * 
	W1217 02:02:53.571763    7596 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000516808s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:02:53.571763    7596 out.go:285] * 
	W1217 02:02:53.573684    7596 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:02:53.577599    7596 out.go:203] 
	W1217 02:02:53.580937    7596 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000516808s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:02:53.580937    7596 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:02:53.580937    7596 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 02:02:53.584112    7596 out.go:203] 

** /stderr **
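The kubeadm output above reduces to one fact: the kubelet never answered its health check on 127.0.0.1:10248. A minimal triage sketch against this profile, assuming the no-preload-184000 container is still running (the commands are standard minikube/systemd invocations; whether they surface the root cause on this particular node is an assumption):

	# Same commands kubeadm suggests, run inside the node via minikube ssh:
	out/minikube-windows-amd64.exe ssh -p no-preload-184000 -- sudo systemctl status kubelet
	out/minikube-windows-amd64.exe ssh -p no-preload-184000 -- sudo journalctl -xeu kubelet --no-pager | tail -n 50
	# Probe the same endpoint kubeadm polled for 4m0s:
	out/minikube-windows-amd64.exe ssh -p no-preload-184000 -- curl -sS http://127.0.0.1:10248/healthz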
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p no-preload-184000 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 109
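The exit path above already names its own remediation (the cgroup-driver suggestion, alongside the cgroups v1 deprecation warning). A hedged retry sketch reusing the test's exact arguments; whether the systemd cgroup driver resolves this on a cgroup v1 WSL2 kernel is an assumption:

	out/minikube-windows-amd64.exe delete -p no-preload-184000
	out/minikube-windows-amd64.exe start -p no-preload-184000 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0 --extra-config=kubelet.cgroup-driver=systemd

If that still trips the cgroups v1 check for kubelet v1.35, the longer-term route hinted at by the warning is moving the WSL2 kernel itself to cgroup v2, e.g. adding kernelCommandLine = cgroup_no_v1=all under [wsl2] in %UserProfile%\.wslconfig and running wsl --shutdown (a host-side change, not exercised in this run).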
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-184000
helpers_test.go:244: (dbg) docker inspect no-preload-184000:

-- stdout --
	[
	    {
	        "Id": "335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed",
	        "Created": "2025-12-17T01:54:01.802457191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 400896,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:54:02.102156548Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/hostname",
	        "HostsPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/hosts",
	        "LogPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed-json.log",
	        "Name": "/no-preload-184000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-184000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-184000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "no-preload-184000",
	                "Source": "/var/lib/docker/volumes/no-preload-184000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-184000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-184000",
	                "name.minikube.sigs.k8s.io": "no-preload-184000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "878415a4285bb4e9322b366762510a9c3489066b0ef84b5d48358f5f81e082bf",
	            "SandboxKey": "/var/run/docker/netns/878415a4285b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62904"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62905"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62907"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62908"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-184000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6adb91d102dfa92bfa154127e93e39401be06a5d21df5043f3e85e012e93e321",
	                    "EndpointID": "8e3f71a707f374d60db9e819d8097a078527854d326de7a03065e5d1fcc8c8bd",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-184000",
	                        "335cbfb80690"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
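Individual fields of an inspect payload like the one above can be pulled out with Go templates instead of scanning the full JSON; a small sketch using standard docker inspect -f syntax (the field paths match the output above):

	docker inspect -f '{{.State.Status}} pid={{.State.Pid}}' no-preload-184000
	# Host port forwarded to the apiserver port 8443/tcp (62908 in this run):
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-184000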
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-184000 -n no-preload-184000
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-184000 -n no-preload-184000: exit status 6 (593.9793ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 02:02:54.644209    2012 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-184000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
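The exit status 6 above tracks the warning in its own stdout: the profile's endpoint is missing from the kubeconfig, so status can report the host as Running while kubectl points at a stale context. The repair the warning itself names, plus a quick verification (assuming only that the profile still exists):

	out/minikube-windows-amd64.exe update-context -p no-preload-184000
	kubectl config current-context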
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-184000 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-184000 logs -n 25: (1.1515055s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-044000 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0        │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:56 UTC │
	│ addons  │ enable metrics-server -p embed-certs-653800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                   │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ stop    │ -p embed-certs-653800 --alsologtostderr -v=3                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-653800 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                              │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p embed-certs-653800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-278200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                         │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ stop    │ -p default-k8s-diff-port-278200 --alsologtostderr -v=3                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-278200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p default-k8s-diff-port-278200 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ old-k8s-version-044000 image list --format=json                                                                                                                                                                            │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ pause   │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ unpause │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │                     │
	│ image   │ embed-certs-653800 image list --format=json                                                                                                                                                                                │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ default-k8s-diff-port-278200 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
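
	The Audit table is minikube's local command history, as dumped by "minikube logs": one row per recent invocation, with the profile it targeted, the invoking user, the client version, and start/end timestamps. The "start -p newest-cni-383500" row has an empty END TIME because that start was still in flight when the logs were collected; it is the run traced in the Last Start section below.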
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:56:50
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:56:50.801354   10580 out.go:360] Setting OutFile to fd 1172 ...
	I1217 01:56:50.842347   10580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:56:50.842347   10580 out.go:374] Setting ErrFile to fd 824...
	I1217 01:56:50.842347   10580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:56:50.868487   10580 out.go:368] Setting JSON to false
	I1217 01:56:50.873633   10580 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8199,"bootTime":1765928411,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 01:56:50.873795   10580 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 01:56:50.877230   10580 out.go:179] * [newest-cni-383500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 01:56:50.879602   10580 notify.go:221] Checking for updates...
	I1217 01:56:50.882592   10580 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 01:56:50.886357   10580 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:56:50.888496   10580 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 01:56:50.891194   10580 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:56:50.892900   10580 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:56:50.897014   10580 config.go:182] Loaded profile config "default-k8s-diff-port-278200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:56:50.897014   10580 config.go:182] Loaded profile config "embed-certs-653800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:56:50.898014   10580 config.go:182] Loaded profile config "no-preload-184000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 01:56:50.898014   10580 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:56:51.023603   10580 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 01:56:51.027600   10580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:56:51.269309   10580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:56:51.250186339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:56:51.271302   10580 out.go:179] * Using the docker driver based on user configuration
	I1217 01:56:51.274302   10580 start.go:309] selected driver: docker
	I1217 01:56:51.274302   10580 start.go:927] validating driver "docker" against <nil>
	I1217 01:56:51.274302   10580 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:56:51.315871   10580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:56:51.584149   10580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:56:51.563534441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:56:51.584149   10580 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 01:56:51.584149   10580 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 01:56:51.585155   10580 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 01:56:51.589148   10580 out.go:179] * Using Docker Desktop driver with root privileges
	I1217 01:56:51.590146   10580 cni.go:84] Creating CNI manager for ""
	I1217 01:56:51.591150   10580 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 01:56:51.591150   10580 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 01:56:51.591150   10580 start.go:353] cluster config:
	{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
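
	The cni.go lines above record the decision being made: no explicit CNI was requested, and with the docker driver plus the docker container runtime on Kubernetes v1.24 or newer, minikube settles on its built-in bridge CNI and writes NetworkPlugin=cni into the cluster config. A minimal Go sketch of that rule (chooseCNI is a hypothetical name; only the branch actually taken in this log is grounded, and the empty fallback is an assumption):

	package main

	import "fmt"

	// chooseCNI condenses the choice logged above. Only the
	// docker-driver + docker-runtime + v1.24+ branch is grounded in the
	// log; the empty fallback is illustrative, since the real logic in
	// cni.go covers many more driver/runtime combinations.
	func chooseCNI(userCNI, driver, containerRuntime string, k8sMinor int) string {
		if userCNI != "" {
			return userCNI // an explicit --cni flag always wins
		}
		if driver == "docker" && containerRuntime == "docker" && k8sMinor >= 24 {
			return "bridge" // "recommending bridge" in the log above
		}
		return "" // unresolved here; the real code handles the remaining cases
	}

	func main() {
		fmt.Println(chooseCNI("", "docker", "docker", 35)) // bridge
	}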
	I1217 01:56:51.593150   10580 out.go:179] * Starting "newest-cni-383500" primary control-plane node in "newest-cni-383500" cluster
	I1217 01:56:51.596146   10580 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 01:56:51.597151   10580 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:56:51.600152   10580 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:56:51.600152   10580 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:56:51.600152   10580 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 01:56:51.600152   10580 cache.go:65] Caching tarball of preloaded images
	I1217 01:56:51.600152   10580 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 01:56:51.600152   10580 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
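
	The preload lines show the lookup order: derive the expected tarball name from the Kubernetes version and container runtime, look for it under .minikube\cache\preloaded-tarball, and skip the download on a hit. A sketch of the path computation, with the naming scheme read off the log line (preloadTarball is a hypothetical helper, and the v18 schema tag is an observation from this log, not a stable contract):

	package main

	import (
		"fmt"
		"path/filepath"
	)

	// preloadTarball rebuilds the cache path seen in the log lines above.
	// The "preloaded-images-k8s-v18-..." naming is copied from this log;
	// treat it as an observation, not a stable minikube API.
	func preloadTarball(minikubeHome, k8sVersion, containerRuntime string) string {
		name := fmt.Sprintf("preloaded-images-k8s-v18-%s-%s-overlay2-amd64.tar.lz4",
			k8sVersion, containerRuntime)
		return filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	}

	func main() {
		// Reproduces the path logged by preload.go:203 (path separators
		// aside when run on a non-Windows host).
		fmt.Println(preloadTarball(`C:\Users\jenkins.minikube4\minikube-integration\.minikube`,
			"v1.35.0-beta.0", "docker"))
	}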
	I1217 01:56:51.601151   10580 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 01:56:51.601151   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json: {Name:mkf80e0956bcb8fe665f18deea862644aea3658c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:56:51.682130   10580 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:56:51.682186   10580 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:56:51.682226   10580 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:56:51.682296   10580 start.go:360] acquireMachinesLock for newest-cni-383500: {Name:mk34ae41921c4a11acc2a38ede8796b825a35934 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:56:51.682463   10580 start.go:364] duration metric: took 127.8µs to acquireMachinesLock for "newest-cni-383500"
	I1217 01:56:51.682643   10580 start.go:93] Provisioning new machine with config: &{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 01:56:51.682643   10580 start.go:125] createHost starting for "" (driver="docker")
	W1217 01:56:50.658968   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	W1217 01:56:53.155347   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	I1217 01:56:50.357392    6652 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:63284/healthz ...
	I1217 01:56:50.369628    6652 api_server.go:279] https://127.0.0.1:63284/healthz returned 200:
	ok
	I1217 01:56:50.373212    6652 api_server.go:141] control plane version: v1.34.2
	I1217 01:56:50.373212    6652 api_server.go:131] duration metric: took 1.5164341s to wait for apiserver health ...
	I1217 01:56:50.373212    6652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 01:56:50.383881    6652 system_pods.go:59] 8 kube-system pods found
	I1217 01:56:50.383935    6652 system_pods.go:61] "coredns-66bc5c9577-mq7nr" [e3b40fbf-c8cf-4da5-a3e1-544cdb2cf9d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:56:50.383972    6652 system_pods.go:61] "etcd-default-k8s-diff-port-278200" [a72b7231-603f-4f60-9395-a7f842c86452] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 01:56:50.383972    6652 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-278200" [8dc29fce-1059-4acc-8a09-64f9eed9a84a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 01:56:50.383972    6652 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-278200" [916662d2-3e76-4bf9-9b11-b4c5cd906d1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 01:56:50.383972    6652 system_pods.go:61] "kube-proxy-hp6zw" [8399cddb-2b50-4401-adbb-83631e5b1a3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 01:56:50.383972    6652 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-278200" [01597b66-6476-4b34-9010-67c8fa5ba2b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 01:56:50.383972    6652 system_pods.go:61] "metrics-server-746fcd58dc-zg2gc" [1347d3c4-9a8a-4e8c-9c00-d649fa23179f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 01:56:50.383972    6652 system_pods.go:61] "storage-provisioner" [89564fde-7887-446a-bab4-f662064c9fde] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 01:56:50.383972    6652 system_pods.go:74] duration metric: took 10.76ms to wait for pod list to return data ...
	I1217 01:56:50.383972    6652 default_sa.go:34] waiting for default service account to be created ...
	I1217 01:56:50.472293    6652 default_sa.go:45] found service account: "default"
	I1217 01:56:50.472293    6652 default_sa.go:55] duration metric: took 88.3195ms for default service account to be created ...
	I1217 01:56:50.472293    6652 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 01:56:50.550966    6652 system_pods.go:86] 8 kube-system pods found
	I1217 01:56:50.550966    6652 system_pods.go:89] "coredns-66bc5c9577-mq7nr" [e3b40fbf-c8cf-4da5-a3e1-544cdb2cf9d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:56:50.551963    6652 system_pods.go:89] "etcd-default-k8s-diff-port-278200" [a72b7231-603f-4f60-9395-a7f842c86452] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 01:56:50.551963    6652 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-278200" [8dc29fce-1059-4acc-8a09-64f9eed9a84a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 01:56:50.551963    6652 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-278200" [916662d2-3e76-4bf9-9b11-b4c5cd906d1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 01:56:50.551963    6652 system_pods.go:89] "kube-proxy-hp6zw" [8399cddb-2b50-4401-adbb-83631e5b1a3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 01:56:50.551963    6652 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-278200" [01597b66-6476-4b34-9010-67c8fa5ba2b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 01:56:50.551963    6652 system_pods.go:89] "metrics-server-746fcd58dc-zg2gc" [1347d3c4-9a8a-4e8c-9c00-d649fa23179f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 01:56:50.551963    6652 system_pods.go:89] "storage-provisioner" [89564fde-7887-446a-bab4-f662064c9fde] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 01:56:50.551963    6652 system_pods.go:126] duration metric: took 79.6691ms to wait for k8s-apps to be running ...
	I1217 01:56:50.551963    6652 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 01:56:50.558963    6652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:56:50.647965    6652 system_svc.go:56] duration metric: took 96.0006ms WaitForService to wait for kubelet
	I1217 01:56:50.647965    6652 kubeadm.go:587] duration metric: took 11.8438008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:56:50.647965    6652 node_conditions.go:102] verifying NodePressure condition ...
	I1217 01:56:50.655959    6652 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1217 01:56:50.655959    6652 node_conditions.go:123] node cpu capacity is 16
	I1217 01:56:50.655959    6652 node_conditions.go:105] duration metric: took 7.9936ms to run NodePressure ...
	I1217 01:56:50.655959    6652 start.go:242] waiting for startup goroutines ...
	I1217 01:56:50.655959    6652 start.go:247] waiting for cluster config update ...
	I1217 01:56:50.655959    6652 start.go:256] writing updated cluster config ...
	I1217 01:56:50.662974    6652 ssh_runner.go:195] Run: rm -f paused
	I1217 01:56:50.670974    6652 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:56:50.679961    6652 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mq7nr" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 01:56:52.758113    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:56:51.685685   10580 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 01:56:51.686059   10580 start.go:159] libmachine.API.Create for "newest-cni-383500" (driver="docker")
	I1217 01:56:51.686127   10580 client.go:173] LocalClient.Create starting
	I1217 01:56:51.686740   10580 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1217 01:56:51.686997   10580 main.go:143] libmachine: Decoding PEM data...
	I1217 01:56:51.686997   10580 main.go:143] libmachine: Parsing certificate...
	I1217 01:56:51.687153   10580 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1217 01:56:51.687320   10580 main.go:143] libmachine: Decoding PEM data...
	I1217 01:56:51.687320   10580 main.go:143] libmachine: Parsing certificate...
	I1217 01:56:51.691438   10580 cli_runner.go:164] Run: docker network inspect newest-cni-383500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 01:56:51.737765   10580 cli_runner.go:211] docker network inspect newest-cni-383500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 01:56:51.740755   10580 network_create.go:284] running [docker network inspect newest-cni-383500] to gather additional debugging logs...
	I1217 01:56:51.740755   10580 cli_runner.go:164] Run: docker network inspect newest-cni-383500
	W1217 01:56:51.801443   10580 cli_runner.go:211] docker network inspect newest-cni-383500 returned with exit code 1
	I1217 01:56:51.802437   10580 network_create.go:287] error running [docker network inspect newest-cni-383500]: docker network inspect newest-cni-383500: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-383500 not found
	I1217 01:56:51.802437   10580 network_create.go:289] output of [docker network inspect newest-cni-383500]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-383500 not found
	
	** /stderr **
	I1217 01:56:51.804999   10580 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:56:51.880941   10580 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:56:51.896006   10580 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:56:51.908781   10580 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000faab70}
	I1217 01:56:51.908781   10580 network_create.go:124] attempt to create docker network newest-cni-383500 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1217 01:56:51.911893   10580 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500
	W1217 01:56:51.964261   10580 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500 returned with exit code 1
	W1217 01:56:51.964261   10580 network_create.go:149] failed to create docker network newest-cni-383500 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1217 01:56:51.964261   10580 network_create.go:116] failed to create docker network newest-cni-383500 192.168.67.0/24, will retry: subnet is taken
	I1217 01:56:51.989641   10580 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:56:52.003768   10580 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f5b5c0}
	I1217 01:56:52.003768   10580 network_create.go:124] attempt to create docker network newest-cni-383500 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 01:56:52.007075   10580 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500
	I1217 01:56:52.149371   10580 network_create.go:108] docker network newest-cni-383500 192.168.76.0/24 created
	I1217 01:56:52.149371   10580 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-383500" container
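
	Taken together, the network_create.go lines describe a linear scan with retry: candidate /24 subnets are generated 9 apart (192.168.49.0, .58, .67, .76, ...), subnets already reserved by existing networks are skipped up front, and a candidate that still collides at create time (the daemon's "Pool overlaps" error) is marked taken and the scan resumes; the node then gets the first usable host address in the winning subnet (192.168.76.2, next to the 192.168.76.1 gateway). A Go sketch of the loop under those assumptions (nextFreeSubnet and tryCreate are illustrative names, and the step-of-9 pattern is inferred from this log rather than from minikube's source):

	package main

	import (
		"errors"
		"fmt"
	)

	// nextFreeSubnet scans 192.168.x.0/24 candidates 9 apart, as the log
	// shows (49, 58, 67, 76, ...), skipping subnets already known to be
	// taken and marking a candidate taken when creation collides.
	func nextFreeSubnet(taken map[string]bool, tryCreate func(string) error) (string, error) {
		for third := 49; third <= 247; third += 9 {
			cidr := fmt.Sprintf("192.168.%d.0/24", third)
			if taken[cidr] {
				continue // "skipping subnet ... that is reserved"
			}
			if err := tryCreate(cidr); err != nil {
				taken[cidr] = true // "will retry: subnet is taken"
				continue
			}
			return cidr, nil
		}
		return "", errors.New("no free 192.168.x.0/24 subnet")
	}

	func main() {
		// Replays the sequence above: 49 and 58 reserved, 67 overlaps at
		// create time, 76 succeeds.
		taken := map[string]bool{"192.168.49.0/24": true, "192.168.58.0/24": true}
		cidr, _ := nextFreeSubnet(taken, func(c string) error {
			if c == "192.168.67.0/24" {
				return errors.New("Pool overlaps with other one on this address space")
			}
			return nil
		})
		fmt.Println(cidr) // 192.168.76.0/24
	}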
	I1217 01:56:52.161020   10580 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 01:56:52.221477   10580 cli_runner.go:164] Run: docker volume create newest-cni-383500 --label name.minikube.sigs.k8s.io=newest-cni-383500 --label created_by.minikube.sigs.k8s.io=true
	I1217 01:56:52.277863   10580 oci.go:103] Successfully created a docker volume newest-cni-383500
	I1217 01:56:52.281622   10580 cli_runner.go:164] Run: docker run --rm --name newest-cni-383500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-383500 --entrypoint /usr/bin/test -v newest-cni-383500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 01:56:53.597934   10580 cli_runner.go:217] Completed: docker run --rm --name newest-cni-383500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-383500 --entrypoint /usr/bin/test -v newest-cni-383500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.3162925s)
	I1217 01:56:53.597934   10580 oci.go:107] Successfully prepared a docker volume newest-cni-383500
	I1217 01:56:53.597934   10580 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:56:53.597934   10580 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 01:56:53.602121   10580 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-383500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
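
	The extraction mechanism is worth spelling out: the preload tarball from the host cache is bind-mounted read-only into a short-lived kicbase container, which untars it into the newest-cni-383500 named volume. When the node container is started later with that volume mounted at /var, /var/lib/docker is already populated with the cached images, so nothing needs to be pulled over the network.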
	W1217 01:56:55.164284   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	W1217 01:56:57.657496   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	W1217 01:56:55.197325    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:56:57.691480    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:56:59.691833    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:00.414359   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	I1217 01:57:01.221784   10700 pod_ready.go:94] pod "coredns-66bc5c9577-rkqgn" is "Ready"
	I1217 01:57:01.221832   10700 pod_ready.go:86] duration metric: took 31.57611s for pod "coredns-66bc5c9577-rkqgn" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.231015   10700 pod_ready.go:83] waiting for pod "etcd-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.305989   10700 pod_ready.go:94] pod "etcd-embed-certs-653800" is "Ready"
	I1217 01:57:01.306038   10700 pod_ready.go:86] duration metric: took 74.9721ms for pod "etcd-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.362260   10700 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.373797   10700 pod_ready.go:94] pod "kube-apiserver-embed-certs-653800" is "Ready"
	I1217 01:57:01.373797   10700 pod_ready.go:86] duration metric: took 11.4721ms for pod "kube-apiserver-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.379508   10700 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.421736   10700 pod_ready.go:94] pod "kube-controller-manager-embed-certs-653800" is "Ready"
	I1217 01:57:01.421778   10700 pod_ready.go:86] duration metric: took 42.2686ms for pod "kube-controller-manager-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.549272   10700 pod_ready.go:83] waiting for pod "kube-proxy-tnkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.831507   10700 pod_ready.go:94] pod "kube-proxy-tnkvj" is "Ready"
	I1217 01:57:02.832053   10700 pod_ready.go:86] duration metric: took 282.7765ms for pod "kube-proxy-tnkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.837864   10700 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.850194   10700 pod_ready.go:94] pod "kube-scheduler-embed-certs-653800" is "Ready"
	I1217 01:57:02.850247   10700 pod_ready.go:86] duration metric: took 12.3828ms for pod "kube-scheduler-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.850295   10700 pod_ready.go:40] duration metric: took 33.2150881s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:57:02.959538   10700 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 01:57:03.043739   10700 out.go:179] * Done! kubectl is now configured to use "embed-certs-653800" cluster and "default" namespace by default
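
	The interleaved pod_ready lines are minikube's post-start extra-wait phase: for each kube-system pod carrying one of the listed control-plane labels, poll until the pod reports Ready or disappears, under a shared 4m0s deadline. A generic Go sketch of such a poll (waitPodsReady and isReady are stand-ins, the 2-second interval is an assumption, and the real code queries the Kubernetes API rather than a callback):

	package main

	import (
		"context"
		"fmt"
		"time"
	)

	// waitPodsReady polls each pod until isReady reports true, bounded by
	// a shared deadline, mirroring the "extra waiting up to 4m0s" phase.
	func waitPodsReady(pods []string, isReady func(string) (bool, error)) error {
		ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
		defer cancel()
		for _, p := range pods {
			for {
				if ok, err := isReady(p); err == nil && ok {
					break // pod is "Ready"; move on to the next one
				}
				select {
				case <-ctx.Done():
					return fmt.Errorf("pod %q not Ready in time: %w", p, ctx.Err())
				case <-time.After(2 * time.Second): // poll interval is an assumption
				}
			}
		}
		return nil
	}

	func main() {
		err := waitPodsReady([]string{"coredns-66bc5c9577-rkqgn"},
			func(string) (bool, error) { return true, nil })
		fmt.Println(err) // <nil>
	}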
	W1217 01:57:01.693305    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:04.195654    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:06.294817    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:08.700814    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:57:10.483352   10580 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-383500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (16.8803148s)
	I1217 01:57:10.483443   10580 kic.go:203] duration metric: took 16.8852234s to extract preloaded images to volume ...
	I1217 01:57:10.489300   10580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:57:10.753192   10580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:57:10.732557974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:57:10.757222   10580 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	W1217 01:57:11.205059    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:13.689668    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:57:11.047255   10580 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-383500 --name newest-cni-383500 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-383500 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-383500 --network newest-cni-383500 --ip 192.168.76.2 --volume newest-cni-383500:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
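
	This docker run creates the node container itself: privileged, with seccomp and AppArmor unconfined, attached to the just-created newest-cni-383500 network at the static IP calculated earlier (192.168.76.2), with the preloaded volume mounted at /var, memory capped at the requested 3072 MB (and swap capped to the same value, so no extra swap), and the API server (8443), SSH (22) and dockerd (2376) ports, plus two addon-related ports (5000, 32443), published to ephemeral 127.0.0.1 ports that later steps look up with docker container inspect.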
	I1217 01:57:11.789740   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Running}}
	I1217 01:57:11.849518   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 01:57:11.908509   10580 cli_runner.go:164] Run: docker exec newest-cni-383500 stat /var/lib/dpkg/alternatives/iptables
	I1217 01:57:12.021676   10580 oci.go:144] the created container "newest-cni-383500" has a running status.
	I1217 01:57:12.021676   10580 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa...
	I1217 01:57:12.131609   10580 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 01:57:12.208714   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 01:57:12.272788   10580 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 01:57:12.273496   10580 kic_runner.go:114] Args: [docker exec --privileged newest-cni-383500 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 01:57:12.387830   10580 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa...
	I1217 01:57:14.496810   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 01:57:14.552924   10580 machine.go:94] provisionDockerMachine start ...
	I1217 01:57:14.556597   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:14.614668   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:14.628589   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:14.628589   10580 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:57:14.803670   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 01:57:14.803752   10580 ubuntu.go:182] provisioning hostname "newest-cni-383500"
	I1217 01:57:14.806966   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:14.872659   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:14.873288   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:14.873288   10580 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-383500 && echo "newest-cni-383500" | sudo tee /etc/hostname
	I1217 01:57:15.070847   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 01:57:15.076754   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:15.138180   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:15.138558   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:15.138558   10580 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-383500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-383500/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-383500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:57:15.322611   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: 
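The hostname step above is an idempotent /etc/hosts edit: add a 127.0.1.1 entry only when the hostname is absent, otherwise rewrite the existing 127.0.1.1 line in place. A minimal standalone sketch of the same pattern (hostname value taken from the log; GNU grep/sed assumed):

    HOSTNAME=newest-cni-383500
    if ! grep -q "\s${HOSTNAME}$" /etc/hosts; then
        if grep -q '^127\.0\.1\.1\s' /etc/hosts; then
            # a 127.0.1.1 entry exists: point it at this hostname
            sudo sed -i "s/^127\.0\.1\.1\s.*/127.0.1.1 ${HOSTNAME}/" /etc/hosts
        else
            # no 127.0.1.1 entry yet: append one
            echo "127.0.1.1 ${HOSTNAME}" | sudo tee -a /etc/hosts
        fi
    fi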
	I1217 01:57:15.322611   10580 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 01:57:15.322611   10580 ubuntu.go:190] setting up certificates
	I1217 01:57:15.322611   10580 provision.go:84] configureAuth start
	I1217 01:57:15.327543   10580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 01:57:15.379974   10580 provision.go:143] copyHostCerts
	I1217 01:57:15.380366   10580 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 01:57:15.380414   10580 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 01:57:15.380832   10580 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 01:57:15.382184   10580 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 01:57:15.382226   10580 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 01:57:15.382581   10580 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 01:57:15.383683   10580 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 01:57:15.383736   10580 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 01:57:15.384159   10580 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 01:57:15.384159   10580 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-383500 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-383500]
	I1217 01:57:15.508571   10580 provision.go:177] copyRemoteCerts
	I1217 01:57:15.512616   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:57:15.515422   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:15.573004   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:15.707286   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 01:57:15.746639   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 01:57:15.775638   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:57:15.812045   10580 provision.go:87] duration metric: took 488.4307ms to configureAuth
	I1217 01:57:15.812045   10580 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:57:15.812045   10580 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 01:57:15.815050   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	W1217 01:57:15.691769    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:17.697151    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:57:15.867044   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:15.867044   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:15.867044   10580 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 01:57:16.041586   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 01:57:16.041586   10580 ubuntu.go:71] root file system type: overlay
	I1217 01:57:16.041586   10580 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 01:57:16.045689   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:16.104012   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:16.104611   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:16.104703   10580 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 01:57:16.297193   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
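The comments inside the unit above describe the override rule it depends on: systemd permits a single ExecStart= for Type=notify services, so a unit that inherits a base configuration must first clear the inherited command with an empty ExecStart= before setting its own. The same idiom in a minimal drop-in (service name and command are placeholders, not from the log):

    # /etc/systemd/system/example.service.d/override.conf
    [Service]
    ExecStart=
    ExecStart=/usr/local/bin/example --flag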
	
	I1217 01:57:16.300844   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:16.360905   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:16.361498   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:16.361540   10580 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 01:57:18.042542   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-17 01:57:16.287130539 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1217 01:57:18.042542   10580 machine.go:97] duration metric: took 3.4895662s to provisionDockerMachine
	I1217 01:57:18.042542   10580 client.go:176] duration metric: took 26.3559894s to LocalClient.Create
	I1217 01:57:18.042542   10580 start.go:167] duration metric: took 26.3560942s to libmachine.API.Create "newest-cni-383500"
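The provisioning step writes the generated unit to docker.service.new and swaps it in only when it differs from the installed one; the diff hunk above is that comparison. A sketch of the install-if-changed idiom as the log runs it:

    NEW=/lib/systemd/system/docker.service.new
    CUR=/lib/systemd/system/docker.service
    # diff exits non-zero when the files differ (or CUR is missing),
    # so the block runs only when an update is actually needed
    sudo diff -u "$CUR" "$NEW" || {
        sudo mv "$NEW" "$CUR"
        sudo systemctl daemon-reload
        sudo systemctl enable docker
        sudo systemctl restart docker
    }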
	I1217 01:57:18.042542   10580 start.go:293] postStartSetup for "newest-cni-383500" (driver="docker")
	I1217 01:57:18.042542   10580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:57:18.050002   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:57:18.053976   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.112173   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:18.256941   10580 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:57:18.268729   10580 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:57:18.268729   10580 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:57:18.268729   10580 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 01:57:18.268729   10580 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 01:57:18.269469   10580 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 01:57:18.273808   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:57:18.289831   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 01:57:18.317384   10580 start.go:296] duration metric: took 274.8381ms for postStartSetup
	I1217 01:57:18.322385   10580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 01:57:18.369389   10580 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 01:57:18.375387   10580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:57:18.381078   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.432604   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:18.561382   10580 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:57:18.571573   10580 start.go:128] duration metric: took 26.8885332s to createHost
	I1217 01:57:18.571573   10580 start.go:83] releasing machines lock for "newest-cni-383500", held for 26.8886481s
	I1217 01:57:18.575096   10580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 01:57:18.630669   10580 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 01:57:18.634666   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.635666   10580 ssh_runner.go:195] Run: cat /version.json
	I1217 01:57:18.639677   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.695664   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:18.695664   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	W1217 01:57:18.859792   10580 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 01:57:18.877228   10580 ssh_runner.go:195] Run: systemctl --version
	I1217 01:57:18.892439   10580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:57:18.900947   10580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:57:18.905555   10580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:57:18.954841   10580 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 01:57:18.954952   10580 start.go:496] detecting cgroup driver to use...
	I1217 01:57:18.955015   10580 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:57:18.955015   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:57:18.991199   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1217 01:57:19.008171   10580 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 01:57:19.008230   10580 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 01:57:19.013119   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 01:57:19.028717   10580 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 01:57:19.032858   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 01:57:19.052914   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:57:19.072904   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 01:57:19.095550   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:57:19.115854   10580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:57:19.132848   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 01:57:19.151846   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 01:57:19.172853   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 01:57:19.193907   10580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:57:19.210892   10580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:57:19.227892   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:19.399536   10580 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 01:57:19.601453   10580 start.go:496] detecting cgroup driver to use...
	I1217 01:57:19.601453   10580 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:57:19.605450   10580 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 01:57:19.629461   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:57:19.656299   10580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:57:19.736745   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:57:19.764285   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:57:19.789001   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:57:19.815453   10580 ssh_runner.go:195] Run: which cri-dockerd
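The crictl.yaml written a few lines above pins crictl to the cri-dockerd socket, so the later crictl version call needs no endpoint flag. The equivalent one-off invocation without the config file (a sketch; socket path taken from the log):

    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version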
	I1217 01:57:19.827238   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 01:57:19.842026   10580 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 01:57:19.874597   10580 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 01:57:20.041348   10580 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 01:57:20.226962   10580 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 01:57:20.226962   10580 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 01:57:20.254551   10580 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 01:57:20.278555   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:20.468211   10580 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 01:57:21.513591   10580 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0453647s)
	I1217 01:57:21.520768   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:57:21.544117   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 01:57:21.578618   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:57:21.602252   10580 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 01:57:21.754251   10580 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 01:57:21.925790   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:22.049631   10580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 01:57:22.080439   10580 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 01:57:22.102178   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:22.247555   10580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 01:57:22.356045   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:57:22.374818   10580 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 01:57:22.380720   10580 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 01:57:22.388747   10580 start.go:564] Will wait 60s for crictl version
	I1217 01:57:22.393402   10580 ssh_runner.go:195] Run: which crictl
	I1217 01:57:22.405105   10580 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:57:22.456110   10580 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 01:57:22.460422   10580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:57:22.517812   10580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:57:22.562431   10580 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 01:57:22.566477   10580 cli_runner.go:164] Run: docker exec -t newest-cni-383500 dig +short host.docker.internal
	I1217 01:57:22.701109   10580 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 01:57:22.707802   10580 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 01:57:22.717558   10580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:57:22.737642   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:22.798183   10580 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1217 01:57:20.222966    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:22.694494    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:57:23.189475    6652 pod_ready.go:94] pod "coredns-66bc5c9577-mq7nr" is "Ready"
	I1217 01:57:23.189475    6652 pod_ready.go:86] duration metric: took 32.5090332s for pod "coredns-66bc5c9577-mq7nr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.194104    6652 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.202184    6652 pod_ready.go:94] pod "etcd-default-k8s-diff-port-278200" is "Ready"
	I1217 01:57:23.202184    6652 pod_ready.go:86] duration metric: took 8.0443ms for pod "etcd-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.206828    6652 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.213978    6652 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-278200" is "Ready"
	I1217 01:57:23.213978    6652 pod_ready.go:86] duration metric: took 7.1505ms for pod "kube-apiserver-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.217306    6652 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.387857    6652 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-278200" is "Ready"
	I1217 01:57:23.387920    6652 pod_ready.go:86] duration metric: took 170.6119ms for pod "kube-controller-manager-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.587111    6652 pod_ready.go:83] waiting for pod "kube-proxy-hp6zw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.985373    6652 pod_ready.go:94] pod "kube-proxy-hp6zw" is "Ready"
	I1217 01:57:23.986730    6652 pod_ready.go:86] duration metric: took 399.613ms for pod "kube-proxy-hp6zw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:24.201566    6652 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:24.586537    6652 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-278200" is "Ready"
	I1217 01:57:24.586586    6652 pod_ready.go:86] duration metric: took 385.0143ms for pod "kube-scheduler-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:24.586640    6652 pod_ready.go:40] duration metric: took 33.9151651s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:57:24.687654    6652 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 01:57:25.088107    6652 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-278200" cluster and "default" namespace by default
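The pod_ready lines from process 6652 above are minikube's own readiness poll: it loops until every labelled kube-system pod reports Ready (32.5s for coredns here) before printing Done. A roughly equivalent manual check with kubectl (a sketch; the label is one of those listed in the log, the timeout is illustrative):

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s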
	I1217 01:57:22.800238   10580 kubeadm.go:884] updating cluster {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:57:22.800267   10580 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:57:22.804334   10580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 01:57:22.840199   10580 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 01:57:22.840199   10580 docker.go:621] Images already preloaded, skipping extraction
	I1217 01:57:22.843860   10580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 01:57:22.875886   10580 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 01:57:22.875953   10580 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:57:22.876007   10580 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1217 01:57:22.876138   10580 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-383500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:57:22.881452   10580 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 01:57:22.963596   10580 cni.go:84] Creating CNI manager for ""
	I1217 01:57:22.963596   10580 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 01:57:22.963596   10580 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 01:57:22.963596   10580 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-383500 NodeName:newest-cni-383500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:57:22.964766   10580 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-383500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
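The multi-document YAML above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is copied to /var/tmp/minikube/kubeadm.yaml below and fed to kubeadm init. When debugging such a config by hand, a dry run validates it without touching the node (a sketch; assumes kubeadm is on PATH):

    sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run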
	
	I1217 01:57:22.971170   10580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:57:22.988148   10580 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:57:22.993571   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:57:23.008239   10580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 01:57:23.168781   10580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 01:57:23.268253   10580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1217 01:57:23.292920   10580 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 01:57:23.298948   10580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:57:23.555705   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:23.774461   10580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:57:23.797469   10580 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500 for IP: 192.168.76.2
	I1217 01:57:23.797574   10580 certs.go:195] generating shared ca certs ...
	I1217 01:57:23.797612   10580 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.797983   10580 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 01:57:23.797983   10580 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 01:57:23.801985   10580 certs.go:257] generating profile certs ...
	I1217 01:57:23.801985   10580 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key
	I1217 01:57:23.802608   10580 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.crt with IP's: []
	I1217 01:57:23.893499   10580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.crt ...
	I1217 01:57:23.893499   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.crt: {Name:mk018179fa6276f140d3c484dc77b112ade6a239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.894491   10580 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key ...
	I1217 01:57:23.894491   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key: {Name:mkf03a928d0759f4e80338ae1a94ef05274842bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.895493   10580 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8
	I1217 01:57:23.895493   10580 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 01:57:23.940939   10580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8 ...
	I1217 01:57:23.940939   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8: {Name:mk793887fd39b61b0148eb1aef73edce147dd7af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.941938   10580 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8 ...
	I1217 01:57:23.941938   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8: {Name:mk75e8d1cb53d5e553bcfb51860f15346eec2f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.941938   10580 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt
	I1217 01:57:23.956750   10580 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key
	I1217 01:57:23.958193   10580 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key
	I1217 01:57:23.958415   10580 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt with IP's: []
	I1217 01:57:24.067269   10580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt ...
	I1217 01:57:24.067269   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt: {Name:mk21db782682ec857bcf614d6ee83e5820624361 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:24.068316   10580 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key ...
	I1217 01:57:24.068316   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key: {Name:mk4bcb88a5770958ea52d64f6df1b6838f8b5fc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:24.097118   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 01:57:24.097649   10580 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 01:57:24.097791   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 01:57:24.098025   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 01:57:24.098025   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 01:57:24.098025   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 01:57:24.098812   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 01:57:24.100115   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:57:24.135459   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:57:24.165011   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:57:24.192410   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:57:24.481059   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 01:57:25.003692   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:57:25.038428   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:57:25.065081   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:57:25.099226   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 01:57:25.144094   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:57:25.174094   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 01:57:25.210940   10580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:57:25.237951   10580 ssh_runner.go:195] Run: openssl version
	I1217 01:57:25.254946   10580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.276935   10580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 01:57:25.294948   10580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.302943   10580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.306934   10580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.370952   10580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:57:25.390944   10580 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
	I1217 01:57:25.415186   10580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.434956   10580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:57:25.453960   10580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.460961   10580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.464957   10580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.515968   10580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:57:25.532957   10580 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 01:57:25.547952   10580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.565954   10580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 01:57:25.583961   10580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.591966   10580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.596965   10580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.654221   10580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:57:25.671221   10580 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
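The openssl/ln sequence above builds the standard OpenSSL CA directory layout: each certificate in /etc/ssl/certs is reachable through a symlink named after its subject hash with a .0 suffix, which is how TLS clients locate it. Condensed to one certificate (the path is a placeholder):

    CERT=/usr/share/ca-certificates/example.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the subject hash, e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"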
	I1217 01:57:25.688222   10580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:57:25.696236   10580 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:57:25.696236   10580 kubeadm.go:401] StartCluster: {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:57:25.699225   10580 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 01:57:25.732231   10580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:57:25.750219   10580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:57:25.764216   10580 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:57:25.768221   10580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:57:25.782223   10580 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:57:25.782223   10580 kubeadm.go:158] found existing configuration files:
	
	I1217 01:57:25.787226   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:57:25.811226   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:57:25.817308   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:57:25.846154   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:57:25.861155   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:57:25.865166   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:57:25.882164   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:57:25.894161   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:57:25.898177   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:57:25.916173   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:57:25.936694   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:57:25.940687   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
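
The sequence above is minikube's stale-config check: each kubeconfig under /etc/kubernetes is grepped for the expected control-plane endpoint, and any file failing the check (here, because none of the files exist yet, so grep exits with status 2) is removed before kubeadm init runs. A minimal shell sketch of the same pattern, where the loop form itself is illustrative but the endpoint, file list, and commands are copied from the log:

    ENDPOINT="https://control-plane.minikube.internal:8443"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep exits non-zero when the string is absent or the file is missing;
      # either way the file is treated as stale and deleted.
      sudo grep -q "$ENDPOINT" "/etc/kubernetes/$f" || sudo rm -f "/etc/kubernetes/$f"
    done
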
	I1217 01:57:25.956687   10580 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:57:26.100043   10580 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 01:57:26.198370   10580 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:57:26.302677   10580 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:58:51.115615    7596 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:58:51.115718    7596 kubeadm.go:319] 
	I1217 01:58:51.115916    7596 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:58:51.121578    7596 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:58:51.121578    7596 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:58:51.121578    7596 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:58:51.122136    7596 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 01:58:51.122857    7596 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_INET: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 01:58:51.123993    7596 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 01:58:51.124691    7596 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] OS: Linux
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:58:51.125946    7596 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:58:51.126099    7596 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:58:51.126099    7596 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:58:51.128573    7596 out.go:252]   - Generating certificates and keys ...
	I1217 01:58:51.128573    7596 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:58:51.128573    7596 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:58:51.129197    7596 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 01:58:51.129388    7596 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 01:58:51.129558    7596 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 01:58:51.129682    7596 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:58:51.130781    7596 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:58:51.130943    7596 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:58:51.131040    7596 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:58:51.131231    7596 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:58:51.131356    7596 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:58:51.131482    7596 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:58:51.131482    7596 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:58:51.133818    7596 out.go:252]   - Booting up control plane ...
	I1217 01:58:51.133818    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:58:51.133818    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:58:51.135780    7596 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:58:51.135780    7596 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:58:51.135780    7596 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.002324195s
	I1217 01:58:51.135780    7596 kubeadm.go:319] 
	I1217 01:58:51.135780    7596 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:58:51.135780    7596 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:58:51.135780    7596 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:58:51.135780    7596 kubeadm.go:319] 
	I1217 01:58:51.135780    7596 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:58:51.135780    7596 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:58:51.136777    7596 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:58:51.136777    7596 kubeadm.go:319] 
	W1217 01:58:51.136777    7596 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.002324195s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
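
The failing wait-control-plane check above can be reproduced by hand inside the node: the probe URL and the triage commands are exactly the ones kubeadm prints, and using minikube ssh as the entry point is an assumption appropriate to the docker driver (profile name taken from the cert lines above):

    # Probe the kubelet health endpoint kubeadm polls for up to 4m0s.
    minikube ssh -p no-preload-184000 -- curl -sSL http://127.0.0.1:10248/healthz
    # On failure, inspect the unit as kubeadm suggests.
    minikube ssh -p no-preload-184000 -- sudo systemctl status kubelet
    minikube ssh -p no-preload-184000 -- sudo journalctl -xeu kubelet
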
	
	I1217 01:58:51.139887    7596 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 01:58:51.605403    7596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:58:51.627327    7596 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:58:51.634266    7596 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:58:51.651778    7596 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:58:51.651778    7596 kubeadm.go:158] found existing configuration files:
	
	I1217 01:58:51.657261    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:58:51.670434    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:58:51.674365    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:58:51.692907    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:58:51.707259    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:58:51.711851    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:58:51.731617    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:58:51.746650    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:58:51.750583    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:58:51.769267    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:58:51.784345    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:58:51.789034    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:58:51.805733    7596 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:58:51.926943    7596 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 01:58:52.006918    7596 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:58:52.107226    7596 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:01:27.963444   10580 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:01:27.963444   10580 kubeadm.go:319] 
	I1217 02:01:27.963616   10580 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:01:27.972023   10580 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 02:01:27.973054   10580 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:01:27.973281   10580 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:01:27.973281   10580 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 02:01:27.973281   10580 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 02:01:27.973281   10580 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 02:01:27.973281   10580 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 02:01:27.973879   10580 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_INET: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 02:01:27.975176   10580 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 02:01:27.975817   10580 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] OS: Linux
	I1217 02:01:27.975876   10580 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:01:27.976495   10580 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:01:27.977232   10580 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:01:27.977413   10580 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:01:27.977413   10580 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:01:27.977413   10580 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:01:27.979976   10580 out.go:252]   - Generating certificates and keys ...
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 02:01:27.981175   10580 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 02:01:27.981278   10580 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 02:01:27.982128   10580 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 02:01:27.982285   10580 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 02:01:27.982463   10580 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 02:01:27.982622   10580 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 02:01:27.983316   10580 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 02:01:27.983431   10580 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 02:01:27.985605   10580 out.go:252]   - Booting up control plane ...
	I1217 02:01:27.985605   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 02:01:27.985605   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 02:01:27.985605   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 02:01:27.986216   10580 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 02:01:27.986315   10580 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:01:27.987339   10580 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000575784s
	I1217 02:01:27.987339   10580 kubeadm.go:319] 
	I1217 02:01:27.987339   10580 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:01:27.987339   10580 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:01:27.987339   10580 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:01:27.987339   10580 kubeadm.go:319] 
	I1217 02:01:27.987913   10580 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:01:27.987913   10580 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:01:27.987913   10580 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:01:27.987913   10580 kubeadm.go:319] 
	W1217 02:01:27.987913   10580 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000575784s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 02:01:27.992425   10580 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 02:01:28.454931   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 02:01:28.474574   10580 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 02:01:28.479997   10580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 02:01:28.494933   10580 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 02:01:28.494933   10580 kubeadm.go:158] found existing configuration files:
	
	I1217 02:01:28.501352   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 02:01:28.516227   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 02:01:28.521874   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 02:01:28.540752   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 02:01:28.554535   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 02:01:28.559019   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 02:01:28.577479   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 02:01:28.592775   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 02:01:28.596757   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 02:01:28.614687   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 02:01:28.629343   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 02:01:28.633759   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 02:01:28.653776   10580 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 02:01:28.777097   10580 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 02:01:28.860083   10580 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 02:01:28.960806   10580 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:02:52.901103    7596 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:02:52.901187    7596 kubeadm.go:319] 
	I1217 02:02:52.901405    7596 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:02:52.906962    7596 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 02:02:52.907051    7596 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:02:52.907051    7596 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:02:52.907051    7596 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 02:02:52.907051    7596 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 02:02:52.907664    7596 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_INET: enabled
	I1217 02:02:52.908322    7596 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 02:02:52.908447    7596 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 02:02:52.908571    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 02:02:52.908730    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 02:02:52.908849    7596 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 02:02:52.909000    7596 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] OS: Linux
	I1217 02:02:52.909731    7596 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:02:52.910342    7596 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:02:52.911109    7596 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:02:52.911252    7596 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:02:52.911252    7596 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:02:52.911252    7596 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:02:52.914099    7596 out.go:252]   - Generating certificates and keys ...
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 02:02:52.915391    7596 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 02:02:52.915391    7596 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 02:02:52.915391    7596 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 02:02:52.915391    7596 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 02:02:52.915391    7596 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 02:02:52.915926    7596 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 02:02:52.916016    7596 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 02:02:52.916016    7596 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 02:02:52.916016    7596 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 02:02:52.916016    7596 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 02:02:52.918827    7596 out.go:252]   - Booting up control plane ...
	I1217 02:02:52.918827    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 02:02:52.920875    7596 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 02:02:52.920875    7596 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:02:52.920875    7596 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000516808s
	I1217 02:02:52.920875    7596 kubeadm.go:319] 
	I1217 02:02:52.920875    7596 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:02:52.920875    7596 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:02:52.920875    7596 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:02:52.920875    7596 kubeadm.go:319] 
	I1217 02:02:52.920875    7596 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:02:52.920875    7596 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:02:52.921883    7596 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:02:52.921883    7596 kubeadm.go:319] 
	I1217 02:02:52.921883    7596 kubeadm.go:403] duration metric: took 8m4.1597601s to StartCluster
	I1217 02:02:52.921883    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:52.925883    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:52.985042    7596 cri.go:89] found id: ""
	I1217 02:02:52.985042    7596 logs.go:282] 0 containers: []
	W1217 02:02:52.985042    7596 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:52.985042    7596 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:52.989497    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:53.035444    7596 cri.go:89] found id: ""
	I1217 02:02:53.035444    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.035444    7596 logs.go:284] No container was found matching "etcd"
	I1217 02:02:53.035444    7596 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:53.040633    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:53.090166    7596 cri.go:89] found id: ""
	I1217 02:02:53.090166    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.090166    7596 logs.go:284] No container was found matching "coredns"
	I1217 02:02:53.090166    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:53.095276    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:53.155229    7596 cri.go:89] found id: ""
	I1217 02:02:53.155292    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.155292    7596 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:53.155292    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:53.159579    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:53.201389    7596 cri.go:89] found id: ""
	I1217 02:02:53.201389    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.201389    7596 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:53.201389    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:53.206627    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:53.251727    7596 cri.go:89] found id: ""
	I1217 02:02:53.251807    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.251807    7596 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:53.251807    7596 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:53.255868    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:53.296927    7596 cri.go:89] found id: ""
	I1217 02:02:53.297002    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.297002    7596 logs.go:284] No container was found matching "kindnet"
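
After the final init failure, minikube scans for any surviving control-plane containers before collecting logs; the seven crictl queries above reduce to one loop. A sketch of that scan, where the component names and the crictl invocation are copied from the log and only the loop form is added:

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet; do
      ids=$(sudo crictl ps -a --quiet --name="$name")
      # An empty result corresponds to the "No container was found" warnings above.
      [ -z "$ids" ] && echo "No container was found matching \"$name\""
    done
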
	I1217 02:02:53.297002    7596 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:53.297002    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:53.362489    7596 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:53.362489    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:53.402379    7596 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:53.402379    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:53.486459    7596 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:02:53.475461   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.476269   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.480737   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.482819   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.484040   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:02:53.475461   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.476269   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.480737   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.482819   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.484040   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:53.486459    7596 logs.go:123] Gathering logs for Docker ...
	I1217 02:02:53.486459    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:02:53.519898    7596 logs.go:123] Gathering logs for container status ...
	I1217 02:02:53.519898    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:02:53.571631    7596 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000516808s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
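	The wait loop above polls the kubelet health endpoint directly; the probe can be replayed by hand from inside the node. A minimal sketch (profile name taken from this log; minikube ssh assumed available on the host):
	    out/minikube-windows-amd64.exe ssh -p no-preload-184000
	    # inside the node: the same probe kubeadm polls
	    curl -sSL http://127.0.0.1:10248/healthz
	    # the kubelet journal the error text points at
	    sudo journalctl -xeu kubelet | tail -n 50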
	W1217 02:02:53.571705    7596 out.go:285] * 
	W1217 02:02:53.571763    7596 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... stdout and stderr identical to the kubeadm init output shown above ...]
	
	W1217 02:02:53.571763    7596 out.go:285] * 
	W1217 02:02:53.573684    7596 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:02:53.577599    7596 out.go:203] 
	W1217 02:02:53.580937    7596 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[... stdout and stderr identical to the kubeadm init output shown above ...]
	
	W1217 02:02:53.580937    7596 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:02:53.580937    7596 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 02:02:53.584112    7596 out.go:203] 
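	The suggestion above corresponds to a retry along these lines; a sketch only, with the flag value copied from the log's own advice:
	    out/minikube-windows-amd64.exe start -p no-preload-184000 --extra-config=kubelet.cgroup-driver=systemd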
	
	
	==> Docker <==
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638787318Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638875828Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638886629Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638892529Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638897830Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638925533Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638969938Z" level=info msg="Initializing buildkit"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.814190912Z" level=info msg="Completed buildkit initialization"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.834145684Z" level=info msg="Daemon has completed initialization"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.834353706Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.834360607Z" level=info msg="API listen on [::]:2376"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.834438816Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 01:54:11 no-preload-184000 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 01:54:12 no-preload-184000 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Loaded network plugin cni"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 01:54:12 no-preload-184000 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
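	cri-dockerd reports "Setting cgroupDriver cgroupfs" above; the driver and cgroup version the daemon actually runs with can be confirmed from the host. A minimal check (not part of the original run):
	    docker info --format "{{.CgroupDriver}} / {{.CgroupVersion}}"
	    # e.g. "cgroupfs / 1" on this host, which kubelet v1.35 rejects by default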
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:02:55.704439   10979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:55.705374   10979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:55.707916   10979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:55.708859   10979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:55.710947   10979 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.736198] tmpfs: Unknown parameter 'noswap'
	[  +0.306826] CPU: 13 PID: 440898 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000005] RIP: 0033:0x7f86f2041b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f86f2041af6.
	[  +0.000001] RSP: 002b:00007ffdf29d7630 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +1.037447] CPU: 4 PID: 441085 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fed1ac73b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7fed1ac73af6.
	[  +0.000001] RSP: 002b:00007fff679e5600 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[ +20.473571] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 02:02:55 up  2:22,  0 user,  load average: 0.75, 2.44, 3.51
	Linux no-preload-184000 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
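	On a WSL2 kernel like the one above, the mounted cgroup hierarchy can be checked directly; a standard probe (not from this run):
	    stat -fc %T /sys/fs/cgroup/
	    # "cgroup2fs" means cgroup v2; "tmpfs" means the legacy v1 hierarchy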
	
	
	==> kubelet <==
	Dec 17 02:02:52 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:02:53 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 17 02:02:53 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:53 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:53 no-preload-184000 kubelet[10727]: E1217 02:02:53.137279   10727 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:02:53 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:02:53 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:02:53 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 02:02:53 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:53 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:53 no-preload-184000 kubelet[10833]: E1217 02:02:53.907256   10833 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:02:53 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:02:53 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:02:54 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 17 02:02:54 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:54 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:54 no-preload-184000 kubelet[10853]: E1217 02:02:54.623492   10853 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:02:54 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:02:54 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:02:55 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 17 02:02:55 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:55 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:55 no-preload-184000 kubelet[10883]: E1217 02:02:55.364928   10883 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:02:55 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:02:55 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
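	The crash loop above fails the FailCgroupV1 validation. If running on a cgroup v1 host is intended, the option the error names lives in the KubeletConfiguration; a sketch of that fragment (not a file from this run):
	    apiVersion: kubelet.config.k8s.io/v1beta1
	    kind: KubeletConfiguration
	    failCgroupV1: false
	Alternatively, the WSL2 host can be booted with cgroup v2 only; assuming the documented .wslconfig kernelCommandLine setting:
	    # %UserProfile%\.wslconfig (assumption: standard WSL2 option)
	    [wsl2]
	    kernelCommandLine = cgroup_no_v1=all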
	

                                                
                                                
-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000: exit status 6 (582.1089ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 02:02:56.742537    7600 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-184000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-184000" apiserver is not running, skipping kubectl commands (state="Stopped")
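The stale-context warning in the status output maps onto the command the log itself suggests; a sketch with this profile name:

	out/minikube-windows-amd64.exe update-context -p no-preload-184000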
--- FAIL: TestStartStop/group/no-preload/serial/FirstStart (540.76s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (522.78s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0
E1217 01:56:52.405896    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:57:00.619391    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 109 (8m39.7885993s)

                                                
                                                
-- stdout --
	* [newest-cni-383500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on user configuration
	* Using Docker Desktop driver with root privileges
	* Starting "newest-cni-383500" primary control-plane node in "newest-cni-383500" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 01:56:50.801354   10580 out.go:360] Setting OutFile to fd 1172 ...
	I1217 01:56:50.842347   10580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:56:50.842347   10580 out.go:374] Setting ErrFile to fd 824...
	I1217 01:56:50.842347   10580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:56:50.868487   10580 out.go:368] Setting JSON to false
	I1217 01:56:50.873633   10580 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8199,"bootTime":1765928411,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 01:56:50.873795   10580 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 01:56:50.877230   10580 out.go:179] * [newest-cni-383500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 01:56:50.879602   10580 notify.go:221] Checking for updates...
	I1217 01:56:50.882592   10580 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 01:56:50.886357   10580 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:56:50.888496   10580 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 01:56:50.891194   10580 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:56:50.892900   10580 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:56:50.897014   10580 config.go:182] Loaded profile config "default-k8s-diff-port-278200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:56:50.897014   10580 config.go:182] Loaded profile config "embed-certs-653800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:56:50.898014   10580 config.go:182] Loaded profile config "no-preload-184000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 01:56:50.898014   10580 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:56:51.023603   10580 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 01:56:51.027600   10580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:56:51.269309   10580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:56:51.250186339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:56:51.271302   10580 out.go:179] * Using the docker driver based on user configuration
	I1217 01:56:51.274302   10580 start.go:309] selected driver: docker
	I1217 01:56:51.274302   10580 start.go:927] validating driver "docker" against <nil>
	I1217 01:56:51.274302   10580 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:56:51.315871   10580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:56:51.584149   10580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:56:51.563534441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:56:51.584149   10580 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 01:56:51.584149   10580 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
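	The warning points at --cni as the user-facing alternative; a start that supplies a CNI explicitly might look like this (sketch; "bridge" picked arbitrarily from the flag's built-in options):
	    out/minikube-windows-amd64.exe start -p newest-cni-383500 --cni=bridge --kubernetes-version=v1.35.0-beta.0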
	I1217 01:56:51.585155   10580 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 01:56:51.589148   10580 out.go:179] * Using Docker Desktop driver with root privileges
	I1217 01:56:51.590146   10580 cni.go:84] Creating CNI manager for ""
	I1217 01:56:51.591150   10580 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 01:56:51.591150   10580 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 01:56:51.591150   10580 start.go:353] cluster config:
	{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:56:51.593150   10580 out.go:179] * Starting "newest-cni-383500" primary control-plane node in "newest-cni-383500" cluster
	I1217 01:56:51.596146   10580 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 01:56:51.597151   10580 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:56:51.600152   10580 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:56:51.600152   10580 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:56:51.600152   10580 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 01:56:51.600152   10580 cache.go:65] Caching tarball of preloaded images
	I1217 01:56:51.600152   10580 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 01:56:51.600152   10580 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 01:56:51.601151   10580 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 01:56:51.601151   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json: {Name:mkf80e0956bcb8fe665f18deea862644aea3658c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:56:51.682130   10580 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:56:51.682186   10580 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:56:51.682226   10580 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:56:51.682296   10580 start.go:360] acquireMachinesLock for newest-cni-383500: {Name:mk34ae41921c4a11acc2a38ede8796b825a35934 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:56:51.682463   10580 start.go:364] duration metric: took 127.8µs to acquireMachinesLock for "newest-cni-383500"
	I1217 01:56:51.682643   10580 start.go:93] Provisioning new machine with config: &{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 01:56:51.682643   10580 start.go:125] createHost starting for "" (driver="docker")
	I1217 01:56:51.685685   10580 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 01:56:51.686059   10580 start.go:159] libmachine.API.Create for "newest-cni-383500" (driver="docker")
	I1217 01:56:51.686127   10580 client.go:173] LocalClient.Create starting
	I1217 01:56:51.686740   10580 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1217 01:56:51.686997   10580 main.go:143] libmachine: Decoding PEM data...
	I1217 01:56:51.686997   10580 main.go:143] libmachine: Parsing certificate...
	I1217 01:56:51.687153   10580 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1217 01:56:51.687320   10580 main.go:143] libmachine: Decoding PEM data...
	I1217 01:56:51.687320   10580 main.go:143] libmachine: Parsing certificate...
	I1217 01:56:51.691438   10580 cli_runner.go:164] Run: docker network inspect newest-cni-383500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 01:56:51.737765   10580 cli_runner.go:211] docker network inspect newest-cni-383500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 01:56:51.740755   10580 network_create.go:284] running [docker network inspect newest-cni-383500] to gather additional debugging logs...
	I1217 01:56:51.740755   10580 cli_runner.go:164] Run: docker network inspect newest-cni-383500
	W1217 01:56:51.801443   10580 cli_runner.go:211] docker network inspect newest-cni-383500 returned with exit code 1
	I1217 01:56:51.802437   10580 network_create.go:287] error running [docker network inspect newest-cni-383500]: docker network inspect newest-cni-383500: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-383500 not found
	I1217 01:56:51.802437   10580 network_create.go:289] output of [docker network inspect newest-cni-383500]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-383500 not found
	
	** /stderr **
	I1217 01:56:51.804999   10580 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:56:51.880941   10580 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:56:51.896006   10580 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:56:51.908781   10580 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000faab70}
	I1217 01:56:51.908781   10580 network_create.go:124] attempt to create docker network newest-cni-383500 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1217 01:56:51.911893   10580 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500
	W1217 01:56:51.964261   10580 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500 returned with exit code 1
	W1217 01:56:51.964261   10580 network_create.go:149] failed to create docker network newest-cni-383500 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1217 01:56:51.964261   10580 network_create.go:116] failed to create docker network newest-cni-383500 192.168.67.0/24, will retry: subnet is taken
	I1217 01:56:51.989641   10580 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:56:52.003768   10580 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f5b5c0}
	I1217 01:56:52.003768   10580 network_create.go:124] attempt to create docker network newest-cni-383500 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 01:56:52.007075   10580 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500
	I1217 01:56:52.149371   10580 network_create.go:108] docker network newest-cni-383500 192.168.76.0/24 created
	I1217 01:56:52.149371   10580 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-383500" container
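The retry sequence above is minikube's free-subnet probe: candidate /24 networks advance the third octet in steps of 9 (192.168.49.0 -> .58 -> .67 -> .76), skipping ranges already known to be reserved and falling back to the next candidate when "docker network create" reports a pool overlap. A minimal sketch of that candidate order, with a hypothetical helper rather than minikube's actual code:

    package main

    import "fmt"

    // candidateSubnets reproduces the probing order visible in the log:
    // starting at 192.168.<base>.0/24, the third octet advances by step
    // until a network create finally succeeds.
    func candidateSubnets(base, step, count int) []string {
        subnets := make([]string, 0, count)
        for i := 0; i < count; i++ {
            subnets = append(subnets, fmt.Sprintf("192.168.%d.0/24", base+i*step))
        }
        return subnets
    }

    func main() {
        for _, s := range candidateSubnets(49, 9, 4) {
            fmt.Println(s) // 192.168.49.0/24, 192.168.58.0/24, 192.168.67.0/24, 192.168.76.0/24
        }
    }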
	I1217 01:56:52.161020   10580 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 01:56:52.221477   10580 cli_runner.go:164] Run: docker volume create newest-cni-383500 --label name.minikube.sigs.k8s.io=newest-cni-383500 --label created_by.minikube.sigs.k8s.io=true
	I1217 01:56:52.277863   10580 oci.go:103] Successfully created a docker volume newest-cni-383500
	I1217 01:56:52.281622   10580 cli_runner.go:164] Run: docker run --rm --name newest-cni-383500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-383500 --entrypoint /usr/bin/test -v newest-cni-383500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 01:56:53.597934   10580 cli_runner.go:217] Completed: docker run --rm --name newest-cni-383500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-383500 --entrypoint /usr/bin/test -v newest-cni-383500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.3162925s)
	I1217 01:56:53.597934   10580 oci.go:107] Successfully prepared a docker volume newest-cni-383500
	I1217 01:56:53.597934   10580 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:56:53.597934   10580 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 01:56:53.602121   10580 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-383500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	I1217 01:57:10.483352   10580 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-383500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (16.8803148s)
	I1217 01:57:10.483443   10580 kic.go:203] duration metric: took 16.8852234s to extract preloaded images to volume ...
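The two "docker run" invocations above form the volume-seeding pattern: a throwaway container first validates the freshly created named volume (/usr/bin/test -d /var/lib), then a second one mounts the preloaded lz4 tarball read-only and untars it into the volume, so the node container later starts with all images already in place. A rough equivalent, assuming only that the docker CLI is on PATH (the names below are placeholders, not the values from this run):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Placeholder values; the log above uses the profile name and the
        // pinned kicbase image digest instead.
        volume := "example-volume"
        tarball := `C:\path\to\preloaded-images.tar.lz4`
        image := "gcr.io/k8s-minikube/kicbase-builds:TAG"

        // docker run --rm --entrypoint /usr/bin/tar \
        //   -v <tarball>:/preloaded.tar:ro -v <volume>:/extractDir \
        //   <image> -I lz4 -xf /preloaded.tar -C /extractDir
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        if out, err := cmd.CombinedOutput(); err != nil {
            log.Fatalf("extract failed: %v\n%s", err, out)
        }
    }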
	I1217 01:57:10.489300   10580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:57:10.753192   10580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:57:10.732557974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:57:10.757222   10580 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1217 01:57:11.047255   10580 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-383500 --name newest-cni-383500 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-383500 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-383500 --network newest-cni-383500 --ip 192.168.76.2 --volume newest-cni-383500:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 01:57:11.789740   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Running}}
	I1217 01:57:11.849518   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 01:57:11.908509   10580 cli_runner.go:164] Run: docker exec newest-cni-383500 stat /var/lib/dpkg/alternatives/iptables
	I1217 01:57:12.021676   10580 oci.go:144] the created container "newest-cni-383500" has a running status.
	I1217 01:57:12.021676   10580 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa...
	I1217 01:57:12.131609   10580 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 01:57:12.208714   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 01:57:12.272788   10580 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 01:57:12.273496   10580 kic_runner.go:114] Args: [docker exec --privileged newest-cni-383500 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 01:57:12.387830   10580 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa...
	I1217 01:57:14.496810   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 01:57:14.552924   10580 machine.go:94] provisionDockerMachine start ...
	I1217 01:57:14.556597   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:14.614668   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:14.628589   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:14.628589   10580 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:57:14.803670   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
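The "docker container inspect -f" format string used throughout this step is a Go text/template expression: two nested index calls walk .NetworkSettings.Ports["22/tcp"][0], and .HostPort reads the bound host port (63415 here), which is how the SSH dialer above learns where to connect. A standalone demonstration with stand-in data, modeling only the fields the format string touches:

    package main

    import (
        "log"
        "os"
        "text/template"
    )

    type portBinding struct{ HostIP, HostPort string }

    func main() {
        // Stand-in for the JSON docker returns for a container.
        data := map[string]any{
            "NetworkSettings": map[string]any{
                "Ports": map[string][]portBinding{
                    "22/tcp": {{HostIP: "127.0.0.1", HostPort: "63415"}},
                },
            },
        }
        tmpl := template.Must(template.New("port").Parse(
            `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`))
        if err := tmpl.Execute(os.Stdout, data); err != nil {
            log.Fatal(err)
        }
        // Prints: 63415
    }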
	
	I1217 01:57:14.803752   10580 ubuntu.go:182] provisioning hostname "newest-cni-383500"
	I1217 01:57:14.806966   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:14.872659   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:14.873288   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:14.873288   10580 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-383500 && echo "newest-cni-383500" | sudo tee /etc/hostname
	I1217 01:57:15.070847   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 01:57:15.076754   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:15.138180   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:15.138558   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:15.138558   10580 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-383500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-383500/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-383500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:57:15.322611   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:57:15.322611   10580 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 01:57:15.322611   10580 ubuntu.go:190] setting up certificates
	I1217 01:57:15.322611   10580 provision.go:84] configureAuth start
	I1217 01:57:15.327543   10580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 01:57:15.379974   10580 provision.go:143] copyHostCerts
	I1217 01:57:15.380366   10580 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 01:57:15.380414   10580 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 01:57:15.380832   10580 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 01:57:15.382184   10580 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 01:57:15.382226   10580 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 01:57:15.382581   10580 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 01:57:15.383683   10580 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 01:57:15.383736   10580 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 01:57:15.384159   10580 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 01:57:15.384159   10580 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-383500 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-383500]
	I1217 01:57:15.508571   10580 provision.go:177] copyRemoteCerts
	I1217 01:57:15.512616   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:57:15.515422   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:15.573004   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:15.707286   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 01:57:15.746639   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 01:57:15.775638   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:57:15.812045   10580 provision.go:87] duration metric: took 488.4307ms to configureAuth
	I1217 01:57:15.812045   10580 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:57:15.812045   10580 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 01:57:15.815050   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:15.867044   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:15.867044   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:15.867044   10580 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 01:57:16.041586   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 01:57:16.041586   10580 ubuntu.go:71] root file system type: overlay
	I1217 01:57:16.041586   10580 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 01:57:16.045689   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:16.104012   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:16.104611   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:16.104703   10580 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 01:57:16.297193   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this option.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
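As the comments inside the generated unit explain, the bare "ExecStart=" line is what makes the rewrite valid: for non-oneshot services systemd rejects a second ExecStart=, so a replacement unit or drop-in must first assign the empty value to clear the inherited command before supplying its own. Reduced to its essentials (the path and flags below are illustrative, not minikube's full set):

    package main

    import "fmt"

    func main() {
        // A drop-in such as /etc/systemd/system/docker.service.d/override.conf
        // (path illustrative) replaces rather than appends to the base command,
        // because the first, empty ExecStart= discards the inherited value.
        fmt.Println("[Service]")
        fmt.Println("ExecStart=")
        fmt.Println("ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock")
    }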
	
	I1217 01:57:16.300844   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:16.360905   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:16.361498   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:16.361540   10580 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 01:57:18.042542   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-17 01:57:16.287130539 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this option.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1217 01:57:18.042542   10580 machine.go:97] duration metric: took 3.4895662s to provisionDockerMachine
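The "diff -u old new || { mv ...; systemctl ... restart docker; }" one-liner above is a change-detection guard: the unit is swapped in and the daemon restarted only when the freshly rendered file differs from what is installed (here it did differ, hence the diff output and the restart). The same idea in Go, as a hypothetical helper rather than minikube's implementation, assuming the caller may write the unit path and invoke systemctl:

    package main

    import (
        "bytes"
        "fmt"
        "os"
        "os/exec"
    )

    // syncUnit writes desired to path and restarts service, but only when
    // the installed content actually differs.
    func syncUnit(path string, desired []byte, service string) error {
        current, err := os.ReadFile(path)
        if err == nil && bytes.Equal(current, desired) {
            return nil // unchanged: no reload, no restart
        }
        if err := os.WriteFile(path, desired, 0o644); err != nil {
            return err
        }
        for _, args := range [][]string{{"daemon-reload"}, {"restart", service}} {
            if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
                return fmt.Errorf("systemctl %v: %v: %s", args, err, out)
            }
        }
        return nil
    }

    func main() {
        unit := []byte("[Unit]\nDescription=demo\n")
        // Demo path and service name; restart is expected to fail here.
        if err := syncUnit("/tmp/demo.service", unit, "demo"); err != nil {
            fmt.Println(err)
        }
    }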
	I1217 01:57:18.042542   10580 client.go:176] duration metric: took 26.3559894s to LocalClient.Create
	I1217 01:57:18.042542   10580 start.go:167] duration metric: took 26.3560942s to libmachine.API.Create "newest-cni-383500"
	I1217 01:57:18.042542   10580 start.go:293] postStartSetup for "newest-cni-383500" (driver="docker")
	I1217 01:57:18.042542   10580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:57:18.050002   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:57:18.053976   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.112173   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:18.256941   10580 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:57:18.268729   10580 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:57:18.268729   10580 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:57:18.268729   10580 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 01:57:18.268729   10580 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 01:57:18.269469   10580 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 01:57:18.273808   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:57:18.289831   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 01:57:18.317384   10580 start.go:296] duration metric: took 274.8381ms for postStartSetup
	I1217 01:57:18.322385   10580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 01:57:18.369389   10580 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 01:57:18.375387   10580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:57:18.381078   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.432604   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:18.561382   10580 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:57:18.571573   10580 start.go:128] duration metric: took 26.8885332s to createHost
	I1217 01:57:18.571573   10580 start.go:83] releasing machines lock for "newest-cni-383500", held for 26.8886481s
	I1217 01:57:18.575096   10580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 01:57:18.630669   10580 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 01:57:18.634666   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.635666   10580 ssh_runner.go:195] Run: cat /version.json
	I1217 01:57:18.639677   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.695664   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:18.695664   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	W1217 01:57:18.859792   10580 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 01:57:18.877228   10580 ssh_runner.go:195] Run: systemctl --version
	I1217 01:57:18.892439   10580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:57:18.900947   10580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:57:18.905555   10580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:57:18.954841   10580 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1217 01:57:18.954952   10580 start.go:496] detecting cgroup driver to use...
	I1217 01:57:18.955015   10580 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:57:18.955015   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:57:18.991199   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1217 01:57:19.008171   10580 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 01:57:19.008230   10580 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 01:57:19.013119   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 01:57:19.028717   10580 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 01:57:19.032858   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 01:57:19.052914   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:57:19.072904   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 01:57:19.095550   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:57:19.115854   10580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:57:19.132848   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 01:57:19.151846   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 01:57:19.172853   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 01:57:19.193907   10580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:57:19.210892   10580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:57:19.227892   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:19.399536   10580 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 01:57:19.601453   10580 start.go:496] detecting cgroup driver to use...
	I1217 01:57:19.601453   10580 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:57:19.605450   10580 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 01:57:19.629461   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:57:19.656299   10580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:57:19.736745   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:57:19.764285   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:57:19.789001   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:57:19.815453   10580 ssh_runner.go:195] Run: which cri-dockerd
	I1217 01:57:19.827238   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 01:57:19.842026   10580 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 01:57:19.874597   10580 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 01:57:20.041348   10580 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 01:57:20.226962   10580 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 01:57:20.226962   10580 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
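"configuring docker to use cgroupfs" above means a small /etc/docker/daemon.json is shipped into the node before the daemon restart two lines below. The exact 130-byte payload is not echoed in the log; the sketch below builds a plausible shape with encoding/json purely for illustration, pinning the cgroup driver via exec-opts (the keys shown are an assumption, not confirmed by this run):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Hypothetical daemon.json contents: the log confirms only that a
        // cgroupfs cgroup-driver setting is pushed, not these exact keys.
        cfg := map[string]any{
            "exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
            "storage-driver": "overlay2",
        }
        out, err := json.MarshalIndent(cfg, "", "  ")
        if err != nil {
            panic(err)
        }
        fmt.Println(string(out)) // candidate content for /etc/docker/daemon.json
    }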
	I1217 01:57:20.254551   10580 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 01:57:20.278555   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:20.468211   10580 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 01:57:21.513591   10580 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0453647s)
	I1217 01:57:21.520768   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:57:21.544117   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 01:57:21.578618   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:57:21.602252   10580 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 01:57:21.754251   10580 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 01:57:21.925790   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:22.049631   10580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 01:57:22.080439   10580 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 01:57:22.102178   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:22.247555   10580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 01:57:22.356045   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:57:22.374818   10580 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 01:57:22.380720   10580 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 01:57:22.388747   10580 start.go:564] Will wait 60s for crictl version
	I1217 01:57:22.393402   10580 ssh_runner.go:195] Run: which crictl
	I1217 01:57:22.405105   10580 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:57:22.456110   10580 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 01:57:22.460422   10580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:57:22.517812   10580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:57:22.562431   10580 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 01:57:22.566477   10580 cli_runner.go:164] Run: docker exec -t newest-cni-383500 dig +short host.docker.internal
	I1217 01:57:22.701109   10580 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 01:57:22.707802   10580 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 01:57:22.717558   10580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
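The bash fragment above makes the hosts entry idempotent: it drops any existing line ending in a tab plus host.minikube.internal, appends the freshly resolved host IP (192.168.65.254, obtained from "dig +short host.docker.internal" a few lines earlier), and copies the result back over /etc/hosts. The same filter-then-append step in Go, under the assumption that the target file is writable by the caller:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHost rewrites an /etc/hosts-style file so that exactly one line
    // maps ip to name, mirroring the grep -v / echo pipeline above.
    func ensureHost(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(string(data), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale mapping for this name
            }
            if line != "" {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // Demo path; the real target would be /etc/hosts inside the node.
        if err := ensureHost("/tmp/hosts", "192.168.65.254", "host.minikube.internal"); err != nil {
            fmt.Println(err)
        }
    }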
	I1217 01:57:22.737642   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:22.798183   10580 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 01:57:22.800238   10580 kubeadm.go:884] updating cluster {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:57:22.800267   10580 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:57:22.804334   10580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 01:57:22.840199   10580 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 01:57:22.840199   10580 docker.go:621] Images already preloaded, skipping extraction
	I1217 01:57:22.843860   10580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 01:57:22.875886   10580 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 01:57:22.875953   10580 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:57:22.876007   10580 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1217 01:57:22.876138   10580 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-383500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:57:22.881452   10580 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 01:57:22.963596   10580 cni.go:84] Creating CNI manager for ""
	I1217 01:57:22.963596   10580 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 01:57:22.963596   10580 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 01:57:22.963596   10580 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-383500 NodeName:newest-cni-383500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:57:22.964766   10580 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-383500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:57:22.971170   10580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:57:22.988148   10580 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:57:22.993571   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:57:23.008239   10580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 01:57:23.168781   10580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 01:57:23.268253   10580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1217 01:57:23.292920   10580 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 01:57:23.298948   10580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:57:23.555705   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:23.774461   10580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:57:23.797469   10580 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500 for IP: 192.168.76.2
	I1217 01:57:23.797574   10580 certs.go:195] generating shared ca certs ...
	I1217 01:57:23.797612   10580 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.797983   10580 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 01:57:23.797983   10580 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 01:57:23.801985   10580 certs.go:257] generating profile certs ...
	I1217 01:57:23.801985   10580 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key
	I1217 01:57:23.802608   10580 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.crt with IP's: []
	I1217 01:57:23.893499   10580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.crt ...
	I1217 01:57:23.893499   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.crt: {Name:mk018179fa6276f140d3c484dc77b112ade6a239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.894491   10580 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key ...
	I1217 01:57:23.894491   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key: {Name:mkf03a928d0759f4e80338ae1a94ef05274842bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.895493   10580 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8
	I1217 01:57:23.895493   10580 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 01:57:23.940939   10580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8 ...
	I1217 01:57:23.940939   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8: {Name:mk793887fd39b61b0148eb1aef73edce147dd7af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.941938   10580 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8 ...
	I1217 01:57:23.941938   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8: {Name:mk75e8d1cb53d5e553bcfb51860f15346eec2f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.941938   10580 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt
	I1217 01:57:23.956750   10580 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key
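For context on the SAN list above: 10.96.0.1 is the first address of the ServiceCIDR 10.96.0.0/12 (the in-cluster kubernetes service VIP) and 192.168.76.2 is the node IP; both must appear in the apiserver serving certificate or in-cluster clients fail TLS verification. A compressed sketch of that generation step, self-signed for brevity where the real flow signs with the minikubeCA key:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// The IP SANs from the log line above: service VIP, loopback, node IP.
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.76.2"),
			},
		}
		// Self-signed here; minikube signs with its CA instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			log.Fatal(err)
		}
		pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}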
	I1217 01:57:23.958193   10580 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key
	I1217 01:57:23.958415   10580 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt with IP's: []
	I1217 01:57:24.067269   10580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt ...
	I1217 01:57:24.067269   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt: {Name:mk21db782682ec857bcf614d6ee83e5820624361 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:24.068316   10580 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key ...
	I1217 01:57:24.068316   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key: {Name:mk4bcb88a5770958ea52d64f6df1b6838f8b5fc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:24.097118   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 01:57:24.097649   10580 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 01:57:24.097791   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 01:57:24.098025   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 01:57:24.098025   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 01:57:24.098025   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 01:57:24.098812   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 01:57:24.100115   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:57:24.135459   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:57:24.165011   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:57:24.192410   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:57:24.481059   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 01:57:25.003692   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:57:25.038428   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:57:25.065081   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:57:25.099226   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 01:57:25.144094   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:57:25.174094   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 01:57:25.210940   10580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:57:25.237951   10580 ssh_runner.go:195] Run: openssl version
	I1217 01:57:25.254946   10580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.276935   10580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 01:57:25.294948   10580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.302943   10580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.306934   10580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.370952   10580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:57:25.390944   10580 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
	I1217 01:57:25.415186   10580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.434956   10580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:57:25.453960   10580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.460961   10580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.464957   10580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.515968   10580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:57:25.532957   10580 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 01:57:25.547952   10580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.565954   10580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 01:57:25.583961   10580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.591966   10580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.596965   10580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.654221   10580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:57:25.671221   10580 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
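The openssl x509 -hash calls above compute the subject-name hash that OpenSSL uses to look up CAs in /etc/ssl/certs: each symlink like b5213941.0 is named <subject-hash>.<n>, where the .0 suffix assumes no hash collision. A small Go sketch of the same install step (assumes openssl on PATH and write access to the certs directory):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert does by program what the log does by hand: look up the
	// certificate's subject hash, then symlink <hash>.0 to it so OpenSSL's
	// hashed-directory lookup finds it.
	func linkCACert(certPath, sslCertsDir string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return err
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(sslCertsDir, hash+".0")
		_ = os.Remove(link) // mirror `ln -fs`: replace any stale link
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}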
	I1217 01:57:25.688222   10580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:57:25.696236   10580 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:57:25.696236   10580 kubeadm.go:401] StartCluster: {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:57:25.699225   10580 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 01:57:25.732231   10580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:57:25.750219   10580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:57:25.764216   10580 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:57:25.768221   10580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:57:25.782223   10580 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:57:25.782223   10580 kubeadm.go:158] found existing configuration files:
	
	I1217 01:57:25.787226   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:57:25.811226   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:57:25.817308   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:57:25.846154   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:57:25.861155   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:57:25.865166   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:57:25.882164   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:57:25.894161   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:57:25.898177   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:57:25.916173   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:57:25.936694   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:57:25.940687   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
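The grep/rm sequence above (repeated again after the retry below) is a stale-config sweep: each kubeconfig is kept only if it already points at https://control-plane.minikube.internal:8443; here the files simply do not exist yet, so the rm -f calls are no-ops. A Go sketch of the same logic, with an illustrative helper name:

	package main

	import (
		"bytes"
		"os"
	)

	// A kubeconfig that does not mention the expected control-plane endpoint
	// (or does not exist at all) is deleted so `kubeadm init` regenerates it.
	func cleanupStaleKubeconfigs(endpoint string, files []string) {
		for _, f := range files {
			data, err := os.ReadFile(f)
			if err == nil && bytes.Contains(data, []byte(endpoint)) {
				continue // points at the right endpoint; keep it
			}
			os.Remove(f) // stale or missing; ignore errors, like rm -f
		}
	}

	func main() {
		cleanupStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
			"/etc/kubernetes/admin.conf",
			"/etc/kubernetes/kubelet.conf",
			"/etc/kubernetes/controller-manager.conf",
			"/etc/kubernetes/scheduler.conf",
		})
	}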
	I1217 01:57:25.956687   10580 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:57:26.100043   10580 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 01:57:26.198370   10580 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:57:26.302677   10580 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:01:27.963444   10580 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:01:27.963444   10580 kubeadm.go:319] 
	I1217 02:01:27.963616   10580 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:01:27.972023   10580 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 02:01:27.973054   10580 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:01:27.973281   10580 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:01:27.973281   10580 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 02:01:27.973281   10580 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 02:01:27.973281   10580 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 02:01:27.973281   10580 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 02:01:27.973879   10580 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_INET: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 02:01:27.975176   10580 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 02:01:27.975817   10580 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] OS: Linux
	I1217 02:01:27.975876   10580 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:01:27.976495   10580 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:01:27.977232   10580 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:01:27.977413   10580 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:01:27.977413   10580 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:01:27.977413   10580 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:01:27.979976   10580 out.go:252]   - Generating certificates and keys ...
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 02:01:27.981175   10580 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 02:01:27.981278   10580 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 02:01:27.982128   10580 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 02:01:27.982285   10580 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 02:01:27.982463   10580 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 02:01:27.982622   10580 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 02:01:27.983316   10580 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 02:01:27.983431   10580 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 02:01:27.985605   10580 out.go:252]   - Booting up control plane ...
	I1217 02:01:27.985605   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 02:01:27.985605   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 02:01:27.985605   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 02:01:27.986216   10580 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 02:01:27.986315   10580 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:01:27.987339   10580 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000575784s
	I1217 02:01:27.987339   10580 kubeadm.go:319] 
	I1217 02:01:27.987339   10580 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:01:27.987339   10580 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:01:27.987339   10580 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:01:27.987339   10580 kubeadm.go:319] 
	I1217 02:01:27.987913   10580 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:01:27.987913   10580 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:01:27.987913   10580 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:01:27.987913   10580 kubeadm.go:319] 
	W1217 02:01:27.987913   10580 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000575784s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
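The failure mode here is worth spelling out: kubeadm's wait-control-plane phase polls the kubelet's healthz endpoint on 127.0.0.1:10248 for up to 4m0s, and a persistent "connection refused" means the kubelet process never bound the port at all, which is why 'journalctl -xeu kubelet' (suggested above) is the right next stop. A minimal Go version of that polling loop:

	package main

	import (
		"fmt"
		"net/http"
		"time"
	)

	// waitKubeletHealthy polls the healthz URL the way the wait-control-plane
	// phase does; connection refused for the whole window means the kubelet
	// never came up.
	func waitKubeletHealthy(url string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		client := &http.Client{Timeout: 2 * time.Second}
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("kubelet not healthy after %s", timeout)
	}

	func main() {
		if err := waitKubeletHealthy("http://127.0.0.1:10248/healthz", 4*time.Minute); err != nil {
			fmt.Println(err)
		}
	}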
	
	I1217 02:01:27.992425   10580 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 02:01:28.454931   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 02:01:28.474574   10580 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 02:01:28.479997   10580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 02:01:28.494933   10580 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 02:01:28.494933   10580 kubeadm.go:158] found existing configuration files:
	
	I1217 02:01:28.501352   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 02:01:28.516227   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 02:01:28.521874   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 02:01:28.540752   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 02:01:28.554535   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 02:01:28.559019   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 02:01:28.577479   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 02:01:28.592775   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 02:01:28.596757   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 02:01:28.614687   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 02:01:28.629343   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 02:01:28.633759   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 02:01:28.653776   10580 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 02:01:28.777097   10580 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 02:01:28.860083   10580 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 02:01:28.960806   10580 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:05:29.785276   10580 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:05:29.785276   10580 kubeadm.go:319] 
	I1217 02:05:29.785276   10580 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:05:29.791358   10580 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 02:05:29.791358   10580 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:05:29.791358   10580 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:05:29.791358   10580 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 02:05:29.791885   10580 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 02:05:29.791966   10580 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 02:05:29.792106   10580 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 02:05:29.792212   10580 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 02:05:29.792322   10580 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 02:05:29.792428   10580 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 02:05:29.792578   10580 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 02:05:29.792647   10580 kubeadm.go:319] CONFIG_INET: enabled
	I1217 02:05:29.792742   10580 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 02:05:29.792840   10580 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 02:05:29.792946   10580 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 02:05:29.793101   10580 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 02:05:29.793180   10580 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 02:05:29.793180   10580 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 02:05:29.793180   10580 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 02:05:29.793180   10580 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 02:05:29.793180   10580 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 02:05:29.793715   10580 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 02:05:29.793854   10580 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 02:05:29.793953   10580 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 02:05:29.794112   10580 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 02:05:29.794256   10580 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 02:05:29.794355   10580 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 02:05:29.794459   10580 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 02:05:29.794742   10580 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 02:05:29.794802   10580 kubeadm.go:319] OS: Linux
	I1217 02:05:29.794969   10580 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:05:29.795102   10580 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:05:29.795785   10580 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:05:29.795959   10580 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:05:29.796062   10580 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:05:29.796062   10580 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:05:29.796062   10580 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:05:29.798726   10580 out.go:252]   - Generating certificates and keys ...
	I1217 02:05:29.798726   10580 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:05:29.798726   10580 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:05:29.799345   10580 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 02:05:29.799533   10580 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 02:05:29.799703   10580 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 02:05:29.799861   10580 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 02:05:29.800020   10580 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 02:05:29.800151   10580 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 02:05:29.800313   10580 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 02:05:29.800441   10580 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 02:05:29.800526   10580 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 02:05:29.800681   10580 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 02:05:29.800781   10580 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 02:05:29.800906   10580 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 02:05:29.800906   10580 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 02:05:29.800906   10580 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 02:05:29.800906   10580 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 02:05:29.800906   10580 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 02:05:29.801499   10580 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 02:05:29.804029   10580 out.go:252]   - Booting up control plane ...
	I1217 02:05:29.804029   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 02:05:29.804029   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 02:05:29.804029   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 02:05:29.804614   10580 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 02:05:29.804614   10580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 02:05:29.804614   10580 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 02:05:29.805159   10580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 02:05:29.805159   10580 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 02:05:29.805159   10580 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 02:05:29.805159   10580 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:05:29.805683   10580 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001314016s
	I1217 02:05:29.805683   10580 kubeadm.go:319] 
	I1217 02:05:29.805683   10580 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:05:29.805778   10580 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:05:29.805778   10580 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:05:29.805778   10580 kubeadm.go:319] 
	I1217 02:05:29.805778   10580 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:05:29.805778   10580 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:05:29.806377   10580 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:05:29.806377   10580 kubeadm.go:319] 
	I1217 02:05:29.806377   10580 kubeadm.go:403] duration metric: took 8m4.1029248s to StartCluster
	I1217 02:05:29.806377   10580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:05:29.810341   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:05:29.871764   10580 cri.go:89] found id: ""
	I1217 02:05:29.871764   10580 logs.go:282] 0 containers: []
	W1217 02:05:29.871764   10580 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:05:29.871764   10580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:05:29.876168   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:05:29.927013   10580 cri.go:89] found id: ""
	I1217 02:05:29.927013   10580 logs.go:282] 0 containers: []
	W1217 02:05:29.927013   10580 logs.go:284] No container was found matching "etcd"
	I1217 02:05:29.927013   10580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:05:29.931518   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:05:29.980022   10580 cri.go:89] found id: ""
	I1217 02:05:29.980022   10580 logs.go:282] 0 containers: []
	W1217 02:05:29.980022   10580 logs.go:284] No container was found matching "coredns"
	I1217 02:05:29.980022   10580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:05:29.984478   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:05:30.032552   10580 cri.go:89] found id: ""
	I1217 02:05:30.032552   10580 logs.go:282] 0 containers: []
	W1217 02:05:30.032552   10580 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:05:30.032552   10580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:05:30.037694   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:05:30.082177   10580 cri.go:89] found id: ""
	I1217 02:05:30.082177   10580 logs.go:282] 0 containers: []
	W1217 02:05:30.082177   10580 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:05:30.082177   10580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:05:30.087245   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:05:30.130585   10580 cri.go:89] found id: ""
	I1217 02:05:30.130585   10580 logs.go:282] 0 containers: []
	W1217 02:05:30.130585   10580 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:05:30.130585   10580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:05:30.137646   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:05:30.177235   10580 cri.go:89] found id: ""
	I1217 02:05:30.177235   10580 logs.go:282] 0 containers: []
	W1217 02:05:30.177235   10580 logs.go:284] No container was found matching "kindnet"
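The empty crictl results above are the expected corollary of the kubelet failure: with the kubelet down, none of the static-pod containers (kube-apiserver, etcd, and the rest) were ever created, so every --name filter comes back empty. A sketch of the same sweep (assumes crictl on PATH and root via sudo):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		components := []string{
			"kube-apiserver", "etcd", "coredns", "kube-scheduler",
			"kube-proxy", "kube-controller-manager", "kindnet",
		}
		for _, name := range components {
			// `crictl ps -a --quiet` prints one container ID per line; empty
			// output means the kubelet never created a container for this name.
			out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
			ids := strings.Fields(string(out))
			fmt.Printf("%-24s %d containers (err=%v)\n", name, len(ids), err)
		}
	}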
	I1217 02:05:30.177235   10580 logs.go:123] Gathering logs for container status ...
	I1217 02:05:30.177235   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:05:30.227457   10580 logs.go:123] Gathering logs for kubelet ...
	I1217 02:05:30.227457   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:05:30.291457   10580 logs.go:123] Gathering logs for dmesg ...
	I1217 02:05:30.291457   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:05:30.331904   10580 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:05:30.331904   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:05:30.416101   10580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:05:30.405239   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.406412   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.407374   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.408863   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.410358   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:05:30.416101   10580 logs.go:123] Gathering logs for Docker ...
	I1217 02:05:30.416101   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:05:30.444965   10580 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001314016s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 02:05:30.445965   10580 out.go:285] * 
	W1217 02:05:30.445965   10580 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001314016s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:05:30.445965   10580 out.go:285] * 
	W1217 02:05:30.447753   10580 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:05:30.453258   10580 out.go:203] 
	W1217 02:05:30.456588   10580 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001314016s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:05:30.457182   10580 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:05:30.457182   10580 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 02:05:30.459905   10580 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:186: failed starting minikube -first start-. args "out/minikube-windows-amd64.exe start -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 109
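The kubeadm and minikube output above already names the next diagnostic steps. As a sketch for local triage (profile name taken from this test; the systemctl/journalctl/curl commands are the ones quoted in the failure text, run inside the node via `minikube ssh`, and the retry flag is minikube's own suggestion):

	# check kubelet state inside the minikube node
	minikube -p newest-cni-383500 ssh -- sudo systemctl status kubelet
	minikube -p newest-cni-383500 ssh -- sudo journalctl -xeu kubelet
	# the health endpoint kubeadm polls; "connection refused" here reproduces the failure above
	minikube -p newest-cni-383500 ssh -- curl -sSL http://127.0.0.1:10248/healthz
	# retry suggested by minikube for cgroup-driver mismatches
	minikube start -p newest-cni-383500 --extra-config=kubelet.cgroup-driver=systemd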
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-383500
helpers_test.go:244: (dbg) docker inspect newest-cni-383500:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638",
	        "Created": "2025-12-17T01:57:11.100405677Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 433106,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:57:11.454843914Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/hostname",
	        "HostsPath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/hosts",
	        "LogPath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638-json.log",
	        "Name": "/newest-cni-383500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-383500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-383500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-383500",
	                "Source": "/var/lib/docker/volumes/newest-cni-383500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-383500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-383500",
	                "name.minikube.sigs.k8s.io": "newest-cni-383500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6caee67017413de1f9dc483ad9459600dcb6111052c799eaefbc16f4be8d0125",
	            "SandboxKey": "/var/run/docker/netns/6caee6701741",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63415"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63416"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63417"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63418"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63419"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-383500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a0a3f566cb0e1e68eaf85fc99a3ee131940651a4c9a15e291bc077be33f07b4e",
	                    "EndpointID": "2d14072f1129746f62b2ed0cbaec8f7f3430521dededc919044dc0c745590f04",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-383500",
	                        "58edac260513"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
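Given the inspect output above, the host-side port bindings can be read back directly rather than scanned by eye; a hedged one-liner using standard docker Go-template syntax (container name from this test):

	# print the host port mapped to the API server port 8443/tcp (63419 in the state shown above)
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-383500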
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-383500 -n newest-cni-383500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-383500 -n newest-cni-383500: exit status 6 (563.0464ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 02:05:31.370748    8520 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-383500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
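The stale-kubeconfig warning in the status output above has a direct fix, per minikube's own hint; a minimal sketch:

	# repoint the kubectl context at this profile, then verify
	minikube update-context -p newest-cni-383500
	kubectl config current-context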
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/FirstStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/FirstStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-383500 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-383500 logs -n 25: (1.1889874s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/FirstStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p embed-certs-653800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-278200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                         │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ stop    │ -p default-k8s-diff-port-278200 --alsologtostderr -v=3                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-278200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p default-k8s-diff-port-278200 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ old-k8s-version-044000 image list --format=json                                                                                                                                                                            │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ pause   │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ unpause │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │                     │
	│ image   │ embed-certs-653800 image list --format=json                                                                                                                                                                                │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ default-k8s-diff-port-278200 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-184000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	│ stop    │ -p no-preload-184000 --alsologtostderr -v=3                                                                                                                                                                                │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ addons  │ enable dashboard -p no-preload-184000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ start   │ -p no-preload-184000 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:05:02
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 02:05:02.629645    6768 out.go:360] Setting OutFile to fd 852 ...
	I1217 02:05:02.671051    6768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:05:02.671051    6768 out.go:374] Setting ErrFile to fd 1172...
	I1217 02:05:02.671051    6768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:05:02.687471    6768 out.go:368] Setting JSON to false
	I1217 02:05:02.690746    6768 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8691,"bootTime":1765928411,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 02:05:02.690781    6768 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 02:05:02.694017    6768 out.go:179] * [no-preload-184000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 02:05:02.699245    6768 notify.go:221] Checking for updates...
	I1217 02:05:02.701769    6768 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:05:02.703938    6768 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:05:02.706929    6768 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 02:05:02.709501    6768 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:05:02.712185    6768 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 02:05:02.715207    6768 config.go:182] Loaded profile config "no-preload-184000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:02.716501    6768 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:05:02.837461    6768 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 02:05:02.842258    6768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:05:03.079348    6768 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:05:03.054281062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
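	That dump is the parsed result of the `docker system info --format "{{json .}}"` call just above it. A narrower way to pull only the fields the driver check cares about, as a sketch for the Windows host (jq is an assumption here, not part of the recorded run):
	# Same data, narrowed to the driver-validation fields seen in the log:
	docker system info --format "{{json .}}" | jq '{ServerVersion, CgroupDriver, NCPU, MemTotal}'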
	I1217 02:05:03.087094    6768 out.go:179] * Using the docker driver based on existing profile
	I1217 02:05:03.091220    6768 start.go:309] selected driver: docker
	I1217 02:05:03.091220    6768 start.go:927] validating driver "docker" against &{Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:03.091220    6768 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:05:03.188409    6768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:05:03.434313    6768 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:05:03.415494177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:05:03.434313    6768 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 02:05:03.434313    6768 cni.go:84] Creating CNI manager for ""
	I1217 02:05:03.434313    6768 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:05:03.434313    6768 start.go:353] cluster config:
	{Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:03.439310    6768 out.go:179] * Starting "no-preload-184000" primary control-plane node in "no-preload-184000" cluster
	I1217 02:05:03.441310    6768 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 02:05:03.443310    6768 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:05:03.448311    6768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:05:03.448311    6768 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:05:03.448311    6768 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\config.json ...
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
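	The `windows sanitize` lines above exist because NTFS does not allow `:` in file names, so the image tag separator is rewritten to `_` before the cache path is used (visible in each `etcd:3.6.5-0 -> etcd_3.6.5-0` mapping). A minimal sketch of that rewrite, as a hypothetical shell helper rather than minikube's actual implementation:
	# Hypothetical sketch of the ':' -> '_' rewrite shown in the log above;
	# minikube's real path sanitizing covers more cases than this one-liner.
	sanitize() { printf '%s\n' "$1" | tr ':' '_'; }
	sanitize "registry.k8s.io/etcd:3.6.5-0"   # -> registry.k8s.io/etcd_3.6.5-0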
	I1217 02:05:03.545905    6768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:05:03.545905    6768 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:05:03.545905    6768 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:05:03.545905    6768 start.go:360] acquireMachinesLock for no-preload-184000: {Name:mk58fd592c3ebf84a2801325b861ffe90e12015f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:03.545905    6768 start.go:364] duration metric: took 0s to acquireMachinesLock for "no-preload-184000"
	I1217 02:05:03.546921    6768 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:05:03.546921    6768 fix.go:54] fixHost starting: 
	I1217 02:05:03.557903    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:03.760117    6768 fix.go:112] recreateIfNeeded on no-preload-184000: state=Stopped err=<nil>
	W1217 02:05:03.760117    6768 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 02:05:03.764113    6768 out.go:252] * Restarting existing docker container for "no-preload-184000" ...
	I1217 02:05:03.767110    6768 cli_runner.go:164] Run: docker start no-preload-184000
	I1217 02:05:05.253549    6768 cli_runner.go:217] Completed: docker start no-preload-184000: (1.4864164s)
	I1217 02:05:05.260543    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:05.357919    6768 kic.go:430] container "no-preload-184000" state is running.
	I1217 02:05:05.364922    6768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-184000
	I1217 02:05:05.444478    6768 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\config.json ...
	I1217 02:05:05.447474    6768 machine.go:94] provisionDockerMachine start ...
	I1217 02:05:05.453480    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:05.545241    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:05.545241    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:05.545241    6768 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:05:05.549583    6768 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 02:05:06.370661    6768 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.370661    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1217 02:05:06.371228    6768 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 2.9228733s
	I1217 02:05:06.371228    6768 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1217 02:05:06.375872    6768 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.375872    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1217 02:05:06.376401    6768 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 2.9275166s
	I1217 02:05:06.376463    6768 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1217 02:05:06.376989    6768 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.377073    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1217 02:05:06.377073    6768 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.9287184s
	I1217 02:05:06.377073    6768 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1217 02:05:06.397758    6768 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.397758    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1217 02:05:06.397758    6768 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.9494026s
	I1217 02:05:06.397758    6768 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1217 02:05:06.401745    6768 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.401745    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1217 02:05:06.401745    6768 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 2.9533893s
	I1217 02:05:06.401745    6768 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1217 02:05:06.434118    6768 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.434118    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1217 02:05:06.434118    6768 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 2.9857618s
	I1217 02:05:06.436060    6768 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1217 02:05:06.469702    6768 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.470703    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1217 02:05:06.470703    6768 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.022347s
	I1217 02:05:06.470703    6768 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1217 02:05:06.521227    6768 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.521321    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1217 02:05:06.521321    6768 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 3.0729641s
	I1217 02:05:06.521321    6768 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1217 02:05:06.521321    6768 cache.go:87] Successfully saved all images to host disk.
	I1217 02:05:08.728111    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-184000
	
	I1217 02:05:08.728111    6768 ubuntu.go:182] provisioning hostname "no-preload-184000"
	I1217 02:05:08.732574    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:08.788471    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:08.788517    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:08.788517    6768 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-184000 && echo "no-preload-184000" | sudo tee /etc/hostname
	I1217 02:05:08.984320    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-184000
	
	I1217 02:05:08.988540    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:09.045241    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:09.046042    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:09.046073    6768 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-184000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-184000/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-184000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:05:09.239223    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 02:05:09.239223    6768 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 02:05:09.239223    6768 ubuntu.go:190] setting up certificates
	I1217 02:05:09.239223    6768 provision.go:84] configureAuth start
	I1217 02:05:09.242936    6768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-184000
	I1217 02:05:09.300521    6768 provision.go:143] copyHostCerts
	I1217 02:05:09.300924    6768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 02:05:09.300924    6768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 02:05:09.301449    6768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 02:05:09.301878    6768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 02:05:09.301878    6768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 02:05:09.302546    6768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 02:05:09.303134    6768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 02:05:09.303134    6768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 02:05:09.303134    6768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 02:05:09.303843    6768 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-184000 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-184000]
	I1217 02:05:09.513127    6768 provision.go:177] copyRemoteCerts
	I1217 02:05:09.517075    6768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:05:09.519665    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:09.573516    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:09.696089    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 02:05:09.723663    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:05:09.749598    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 02:05:09.779713    6768 provision.go:87] duration metric: took 540.4619ms to configureAuth
	I1217 02:05:09.779730    6768 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:05:09.779917    6768 config.go:182] Loaded profile config "no-preload-184000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:09.784013    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:09.841680    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:09.841680    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:09.841680    6768 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 02:05:10.010881    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 02:05:10.010926    6768 ubuntu.go:71] root file system type: overlay
	I1217 02:05:10.011054    6768 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 02:05:10.014899    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:10.071419    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:10.071649    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:10.071649    6768 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 02:05:10.253657    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 02:05:10.257912    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:10.314224    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:10.314288    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:10.314288    6768 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 02:05:10.496294    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
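	The `diff ... || { ... }` one-liner above is an update-if-changed guard: the freshly rendered unit is swapped in, and dockerd reloaded and restarted, only when it differs from the installed file (`diff` exits non-zero exactly when the files differ). The same logic, unrolled:
	# Behavior-equivalent unrolled form of the guard above:
	if ! sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new; then
	    sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
	    sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	fi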
	I1217 02:05:10.496294    6768 machine.go:97] duration metric: took 5.0487445s to provisionDockerMachine
	I1217 02:05:10.496294    6768 start.go:293] postStartSetup for "no-preload-184000" (driver="docker")
	I1217 02:05:10.496294    6768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:05:10.501160    6768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:05:10.504159    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:10.558430    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:10.698125    6768 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:05:10.706351    6768 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:05:10.706403    6768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:05:10.706403    6768 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 02:05:10.706403    6768 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 02:05:10.707067    6768 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 02:05:10.711519    6768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:05:10.725151    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 02:05:10.754903    6768 start.go:296] duration metric: took 258.6046ms for postStartSetup
	I1217 02:05:10.759061    6768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:05:10.762269    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:10.816597    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:10.943522    6768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:05:10.958658    6768 fix.go:56] duration metric: took 7.411626s for fixHost
	I1217 02:05:10.958658    6768 start.go:83] releasing machines lock for "no-preload-184000", held for 7.4126419s
	I1217 02:05:10.962906    6768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-184000
	I1217 02:05:11.017406    6768 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 02:05:11.021445    6768 ssh_runner.go:195] Run: cat /version.json
	I1217 02:05:11.021510    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:11.024650    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:11.076963    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:11.082042    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	W1217 02:05:11.198310    6768 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 02:05:11.210947    6768 ssh_runner.go:195] Run: systemctl --version
	I1217 02:05:11.226813    6768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:05:11.235667    6768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:05:11.242573    6768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:05:11.255007    6768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:05:11.255007    6768 start.go:496] detecting cgroup driver to use...
	I1217 02:05:11.255007    6768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:05:11.256009    6768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:05:11.283766    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:05:11.303122    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:05:11.317795    6768 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:05:11.321726    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:05:11.340924    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W1217 02:05:11.357913    6768 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 02:05:11.357979    6768 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 02:05:11.359375    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 02:05:11.377107    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:05:11.395476    6768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:05:11.418432    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:05:11.437643    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:05:11.458621    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:05:11.477313    6768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:05:11.495090    6768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:05:11.513809    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:11.664976    6768 ssh_runner.go:195] Run: sudo systemctl restart containerd
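	The sed pass above pins containerd to the cgroupfs driver (matching the `detected "cgroupfs" cgroup driver on host os` line) before this restart. A spot-check of the result, run inside the minikube container; this is an illustration, not part of the recorded run:
	# Expect "SystemdCgroup = false" after the sed pass above:
	grep -n 'SystemdCgroup' /etc/containerd/config.toml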
	I1217 02:05:11.829322    6768 start.go:496] detecting cgroup driver to use...
	I1217 02:05:11.829433    6768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:05:11.835895    6768 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 02:05:11.860815    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:05:11.883615    6768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 02:05:11.960567    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:05:11.983346    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:05:12.002889    6768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:05:12.032515    6768 ssh_runner.go:195] Run: which cri-dockerd
	I1217 02:05:12.044249    6768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 02:05:12.056817    6768 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 02:05:12.080834    6768 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 02:05:12.249437    6768 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 02:05:12.397968    6768 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 02:05:12.397968    6768 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 02:05:12.425594    6768 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 02:05:12.447409    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:12.604225    6768 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 02:05:13.440560    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:05:13.466105    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 02:05:13.489994    6768 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 02:05:13.514704    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:05:13.536605    6768 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 02:05:13.693215    6768 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 02:05:13.846670    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:14.004258    6768 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 02:05:14.030193    6768 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 02:05:14.055627    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:14.209153    6768 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 02:05:14.322039    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:05:14.339530    6768 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 02:05:14.345129    6768 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
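	The 60s wait above succeeds as soon as that `stat` does; the equivalent manual probe inside the container (again an illustration, not part of the run):
	# Both the unix socket and its systemd units should be live at this point:
	sudo stat /var/run/cri-dockerd.sock && sudo systemctl is-active cri-docker.socket cri-docker.service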
	I1217 02:05:14.353653    6768 start.go:564] Will wait 60s for crictl version
	I1217 02:05:14.357665    6768 ssh_runner.go:195] Run: which crictl
	I1217 02:05:14.368483    6768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:05:14.413189    6768 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 02:05:14.417273    6768 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:05:14.462617    6768 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:05:14.502904    6768 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 02:05:14.506033    6768 cli_runner.go:164] Run: docker exec -t no-preload-184000 dig +short host.docker.internal
	I1217 02:05:14.646991    6768 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 02:05:14.651689    6768 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 02:05:14.659909    6768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
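	The one-liner above rewrites /etc/hosts by filtering into a temp file and copying it back, rather than using `sed -i`, since /etc/hosts is bind-mounted into the container and can only be overwritten in place, not renamed. Unrolled, step by step:
	# Drop any stale host.minikube.internal line, append the fresh mapping,
	# then copy over /etc/hosts in place (a rename would fail on the bind mount):
	grep -v $'\thost.minikube.internal$' /etc/hosts > /tmp/h.$$
	echo $'192.168.65.254\thost.minikube.internal' >> /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts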
	I1217 02:05:14.680414    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:14.733079    6768 kubeadm.go:884] updating cluster {Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:05:14.734079    6768 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:05:14.737079    6768 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 02:05:14.767963    6768 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 02:05:14.767963    6768 cache_images.go:86] Images are preloaded, skipping loading
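The "Images are preloaded" decision is a set comparison: list what the docker daemon already holds and check that every expected control-plane image is present, so no tarball load is needed. An illustrative sketch (expected list shortened to three of the images printed above):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same listing command minikube runs over SSH above.
		out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
		if err != nil {
			fmt.Println(err)
			return
		}
		have := map[string]bool{}
		for _, img := range strings.Fields(string(out)) {
			have[img] = true
		}
		expected := []string{
			"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
			"registry.k8s.io/etcd:3.6.5-0",
			"registry.k8s.io/pause:3.10.1",
		}
		for _, img := range expected {
			if !have[img] {
				fmt.Println("missing, would load:", img)
			}
		}
	}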
	I1217 02:05:14.767963    6768 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 docker true true} ...
	I1217 02:05:14.768480    6768 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-184000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 02:05:14.771542    6768 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 02:05:14.846616    6768 cni.go:84] Creating CNI manager for ""
	I1217 02:05:14.846636    6768 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:05:14.846636    6768 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 02:05:14.846636    6768 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-184000 NodeName:no-preload-184000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:05:14.846636    6768 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-184000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 02:05:14.851632    6768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 02:05:14.863585    6768 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:05:14.868130    6768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:05:14.879683    6768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 02:05:14.899726    6768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:05:14.919991    6768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
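The generated config above is one multi-document YAML file (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) shipped to /var/tmp/minikube/kubeadm.yaml.new. A hedged sketch, separate from minikube, that splits the file on document separators and prints each document's apiVersion and kind as a quick offline sanity check (uses gopkg.in/yaml.v3):

	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	// header captures only the identifying fields of each YAML document.
	type header struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, doc := range strings.Split(string(raw), "\n---\n") {
			var h header
			if err := yaml.Unmarshal([]byte(doc), &h); err != nil {
				fmt.Fprintln(os.Stderr, "bad document:", err)
				continue
			}
			fmt.Printf("%s / %s\n", h.APIVersion, h.Kind)
		}
	}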
	I1217 02:05:14.944949    6768 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:05:14.952431    6768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:05:14.972008    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:15.116248    6768 ssh_runner.go:195] Run: sudo systemctl start kubelet
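Condensed, the kubelet bring-up above is: write the 10-kubeadm.conf drop-in and the unit file over SSH, reload systemd, start the service. A simplified local sketch of that sequence (the ExecStart flags are the ones shown in the generated unit above; error handling is minimal):

	package main

	import (
		"os"
		"os/exec"
	)

	func main() {
		// Drop-in equivalent to the generated 10-kubeadm.conf.
		dropIn := "[Service]\nExecStart=\nExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet " +
			"--bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf " +
			"--config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-184000 " +
			"--kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2\n"
		if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0755); err != nil {
			panic(err)
		}
		if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0644); err != nil {
			panic(err)
		}
		if err := exec.Command("systemctl", "daemon-reload").Run(); err != nil {
			panic(err)
		}
		if err := exec.Command("systemctl", "start", "kubelet").Run(); err != nil {
			panic(err)
		}
	}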
	I1217 02:05:15.140002    6768 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000 for IP: 192.168.94.2
	I1217 02:05:15.140002    6768 certs.go:195] generating shared ca certs ...
	I1217 02:05:15.140002    6768 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:15.140318    6768 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 02:05:15.140318    6768 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 02:05:15.140951    6768 certs.go:257] generating profile certs ...
	I1217 02:05:15.141475    6768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\client.key
	I1217 02:05:15.141776    6768 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.key.d162c569
	I1217 02:05:15.141823    6768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\proxy-client.key
	I1217 02:05:15.142712    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 02:05:15.142929    6768 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 02:05:15.142993    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 02:05:15.143196    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 02:05:15.143459    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 02:05:15.143743    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 02:05:15.144134    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 02:05:15.145445    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:05:15.174639    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 02:05:15.206543    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:05:15.237390    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 02:05:15.269725    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:05:15.299081    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:05:15.331970    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:05:15.364258    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 02:05:15.394880    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 02:05:15.424665    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:05:15.454305    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 02:05:15.482694    6768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:05:15.505956    6768 ssh_runner.go:195] Run: openssl version
	I1217 02:05:15.520857    6768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 02:05:15.538884    6768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 02:05:15.556769    6768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 02:05:15.565231    6768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 02:05:15.569694    6768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 02:05:15.618090    6768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:05:15.636651    6768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:15.657687    6768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:05:15.678656    6768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:15.686438    6768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:15.690381    6768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:15.738620    6768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:05:15.756906    6768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 02:05:15.776662    6768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 02:05:15.794117    6768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 02:05:15.801453    6768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 02:05:15.805697    6768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 02:05:15.853109    6768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
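The openssl x509 -hash / test -L pairs above implement OpenSSL's hashed-directory lookup: each trusted PEM must be reachable through a /etc/ssl/certs/<subject-hash>.0 symlink (3ec20f2e.0, b5213941.0, 51391683.0 in this run), or TLS verification cannot find it. A sketch of the same verification for one certificate:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above
		// Ask openssl for the certificate's subject hash.
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// The hashed name must be a symlink back to the PEM.
		link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
		target, err := os.Readlink(link)
		if err != nil {
			fmt.Fprintf(os.Stderr, "%s is not a symlink: %v\n", link, err)
			os.Exit(1)
		}
		fmt.Printf("%s -> %s\n", link, target)
	}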
	I1217 02:05:15.871938    6768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:05:15.885136    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:05:15.931869    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:05:15.978751    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:05:16.028376    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:05:16.079257    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:05:16.133289    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
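Each -checkend 86400 probe above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is what would trigger regeneration. The equivalent check in Go's crypto/x509, for one of the certs probed above:

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		raw, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		block, _ := pem.Decode(raw)
		if block == nil {
			fmt.Fprintln(os.Stderr, "no PEM data")
			os.Exit(1)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Mirror `openssl x509 -checkend 86400`: fail if NotAfter is near.
		if time.Until(cert.NotAfter) < 24*time.Hour {
			fmt.Println("certificate expires within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate valid beyond 24h")
	}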
	I1217 02:05:16.177187    6768 kubeadm.go:401] StartCluster: {Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:16.181577    6768 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 02:05:16.216215    6768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:05:16.228229    6768 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:05:16.228229    6768 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:05:16.233407    6768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:05:16.246099    6768 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:05:16.251775    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:16.304124    6768 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-184000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:05:16.305294    6768 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-184000" cluster setting kubeconfig missing "no-preload-184000" context setting]
	I1217 02:05:16.305850    6768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
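The repair step adds the missing cluster and context stanzas for no-preload-184000 and rewrites the kubeconfig under a write lock. A hedged sketch of the same idea using client-go's clientcmd package (the server port here is hypothetical; the real one comes from the 8443/tcp docker port mapping queried above, and credentials are omitted):

	package main

	import (
		"fmt"
		"os"

		"k8s.io/client-go/tools/clientcmd"
		clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
	)

	func main() {
		path := os.Getenv("KUBECONFIG")
		cfg, err := clientcmd.LoadFromFile(path)
		if err != nil {
			cfg = clientcmdapi.NewConfig() // start fresh if the file is absent
		}
		cfg.Clusters["no-preload-184000"] = &clientcmdapi.Cluster{
			Server: "https://127.0.0.1:55555", // hypothetical mapped host port
		}
		cfg.AuthInfos["no-preload-184000"] = &clientcmdapi.AuthInfo{} // credentials omitted in this sketch
		cfg.Contexts["no-preload-184000"] = &clientcmdapi.Context{
			Cluster:  "no-preload-184000",
			AuthInfo: "no-preload-184000",
		}
		if err := clientcmd.WriteToFile(*cfg, path); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}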
	I1217 02:05:16.326797    6768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:05:16.342507    6768 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1217 02:05:16.342507    6768 kubeadm.go:602] duration metric: took 114.2766ms to restartPrimaryControlPlane
	I1217 02:05:16.342507    6768 kubeadm.go:403] duration metric: took 165.3768ms to StartCluster
	I1217 02:05:16.342507    6768 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:16.342507    6768 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:05:16.343620    6768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:16.344231    6768 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 02:05:16.344231    6768 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:05:16.344231    6768 addons.go:70] Setting storage-provisioner=true in profile "no-preload-184000"
	I1217 02:05:16.344231    6768 addons.go:239] Setting addon storage-provisioner=true in "no-preload-184000"
	I1217 02:05:16.344231    6768 addons.go:70] Setting dashboard=true in profile "no-preload-184000"
	I1217 02:05:16.344231    6768 config.go:182] Loaded profile config "no-preload-184000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:16.344231    6768 host.go:66] Checking if "no-preload-184000" exists ...
	I1217 02:05:16.344231    6768 addons.go:70] Setting default-storageclass=true in profile "no-preload-184000"
	I1217 02:05:16.344231    6768 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-184000"
	I1217 02:05:16.344231    6768 addons.go:239] Setting addon dashboard=true in "no-preload-184000"
	W1217 02:05:16.344929    6768 addons.go:248] addon dashboard should already be in state true
	I1217 02:05:16.344929    6768 host.go:66] Checking if "no-preload-184000" exists ...
	I1217 02:05:16.347844    6768 out.go:179] * Verifying Kubernetes components...
	I1217 02:05:16.354044    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:16.354121    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:16.355814    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:16.357052    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:16.409696    6768 addons.go:239] Setting addon default-storageclass=true in "no-preload-184000"
	I1217 02:05:16.409696    6768 host.go:66] Checking if "no-preload-184000" exists ...
	I1217 02:05:16.410688    6768 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:05:16.412689    6768 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:16.412689    6768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:05:16.416693    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:16.417698    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:16.423696    6768 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:05:16.425691    6768 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 02:05:16.428703    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:05:16.428703    6768 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:05:16.431694    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:16.467691    6768 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:16.468689    6768 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:05:16.469695    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:16.471696    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:16.482691    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:16.518691    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:16.521691    6768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:05:16.604232    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:16.609620    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:05:16.609620    6768 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:05:16.632701    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:05:16.632701    6768 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:05:16.648900    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:16.655841    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:05:16.655841    6768 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:05:16.700825    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:05:16.700825    6768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 02:05:16.727124    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:05:16.728137    6768 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 02:05:16.747122    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:05:16.747167    6768 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:05:16.768592    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:05:16.768592    6768 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W1217 02:05:16.800138    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:16.800273    6768 retry.go:31] will retry after 331.277361ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:16.806289    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-184000
	W1217 02:05:16.807169    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:16.807169    6768 retry.go:31] will retry after 367.14462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:16.821991    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:05:16.821991    6768 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:05:16.842976    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:16.842976    6768 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:05:16.864982    6768 node_ready.go:35] waiting up to 6m0s for node "no-preload-184000" to be "Ready" ...
	I1217 02:05:16.867979    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:16.963061    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:16.963061    6768 retry.go:31] will retry after 179.721934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.138499    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:17.147072    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:17.178163    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:17.232301    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.232367    6768 retry.go:31] will retry after 261.645604ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:17.232463    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.232532    6768 retry.go:31] will retry after 358.922489ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:17.264584    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.264642    6768 retry.go:31] will retry after 293.195494ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.499020    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:17.564644    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:17.598253    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:17.609802    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.609802    6768 retry.go:31] will retry after 356.11648ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:17.728986    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.728986    6768 retry.go:31] will retry after 414.908289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:17.728986    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.728986    6768 retry.go:31] will retry after 471.765196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.972892    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:18.048428    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.048428    6768 retry.go:31] will retry after 848.614748ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.149277    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:18.205928    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:18.270282    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.270282    6768 retry.go:31] will retry after 717.444443ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:18.309651    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.309651    6768 retry.go:31] will retry after 981.836066ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.901981    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:18.981321    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.981857    6768 retry.go:31] will retry after 1.188790069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.992863    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:19.074677    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:19.074677    6768 retry.go:31] will retry after 947.510236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:19.297489    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:19.377867    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:19.377937    6768 retry.go:31] will retry after 1.104512362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:20.028161    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:20.102126    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:20.102126    6768 retry.go:31] will retry after 2.018338834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:20.175978    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:20.253210    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:20.253210    6768 retry.go:31] will retry after 2.536835686s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:20.487984    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:20.611020    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:20.611556    6768 retry.go:31] will retry after 1.621989786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.126652    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:22.202802    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.202802    6768 retry.go:31] will retry after 2.213473046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.239657    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:22.319492    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.319565    6768 retry.go:31] will retry after 2.644500815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.794504    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:22.901867    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.901867    6768 retry.go:31] will retry after 2.159892203s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:24.422186    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:24.505078    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:24.505078    6768 retry.go:31] will retry after 5.38992916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:24.969459    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:25.066905    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:25.098830    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:25.098830    6768 retry.go:31] will retry after 2.819506289s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:25.172740    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:25.172777    6768 retry.go:31] will retry after 5.817482434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:26.902270    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:05:29.785276   10580 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:05:29.785276   10580 kubeadm.go:319] 
	I1217 02:05:29.785276   10580 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:05:29.791358   10580 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 02:05:29.791358   10580 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:05:29.791358   10580 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:05:29.791358   10580 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 02:05:29.791885   10580 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 02:05:29.791966   10580 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 02:05:29.792106   10580 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 02:05:29.792212   10580 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 02:05:29.792322   10580 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 02:05:29.792428   10580 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 02:05:29.792578   10580 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 02:05:29.792647   10580 kubeadm.go:319] CONFIG_INET: enabled
	I1217 02:05:29.792742   10580 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 02:05:29.792840   10580 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 02:05:29.792946   10580 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 02:05:29.793101   10580 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 02:05:29.793180   10580 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 02:05:29.793180   10580 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 02:05:29.793180   10580 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 02:05:29.793180   10580 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 02:05:29.793180   10580 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 02:05:29.793715   10580 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 02:05:29.793854   10580 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 02:05:29.793953   10580 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 02:05:29.794112   10580 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 02:05:29.794256   10580 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 02:05:29.794355   10580 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 02:05:29.794459   10580 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 02:05:29.794742   10580 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 02:05:29.794802   10580 kubeadm.go:319] OS: Linux
	I1217 02:05:29.794969   10580 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:05:29.795102   10580 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:05:29.795785   10580 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:05:29.795959   10580 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:05:29.796062   10580 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:05:29.796062   10580 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:05:29.796062   10580 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:05:29.798726   10580 out.go:252]   - Generating certificates and keys ...
	I1217 02:05:29.798726   10580 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:05:29.798726   10580 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:05:29.799345   10580 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 02:05:29.799533   10580 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 02:05:29.799703   10580 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 02:05:29.799861   10580 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 02:05:29.800020   10580 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 02:05:29.800151   10580 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 02:05:29.800313   10580 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 02:05:29.800441   10580 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 02:05:29.800526   10580 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 02:05:29.800681   10580 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 02:05:29.800781   10580 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 02:05:29.800906   10580 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 02:05:29.800906   10580 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 02:05:29.800906   10580 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 02:05:29.800906   10580 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 02:05:29.800906   10580 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 02:05:29.801499   10580 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 02:05:29.804029   10580 out.go:252]   - Booting up control plane ...
	I1217 02:05:29.804029   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 02:05:29.804029   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 02:05:29.804029   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 02:05:29.804614   10580 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 02:05:29.804614   10580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 02:05:29.804614   10580 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 02:05:29.805159   10580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 02:05:29.805159   10580 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 02:05:29.805159   10580 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 02:05:29.805159   10580 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:05:29.805683   10580 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001314016s
	I1217 02:05:29.805683   10580 kubeadm.go:319] 
	I1217 02:05:29.805683   10580 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:05:29.805778   10580 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:05:29.805778   10580 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:05:29.805778   10580 kubeadm.go:319] 
	I1217 02:05:29.805778   10580 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:05:29.805778   10580 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:05:29.806377   10580 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:05:29.806377   10580 kubeadm.go:319] 
	I1217 02:05:29.806377   10580 kubeadm.go:403] duration metric: took 8m4.1029248s to StartCluster
	I1217 02:05:29.806377   10580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:05:29.810341   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:05:29.871764   10580 cri.go:89] found id: ""
	I1217 02:05:29.871764   10580 logs.go:282] 0 containers: []
	W1217 02:05:29.871764   10580 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:05:29.871764   10580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:05:29.876168   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:05:29.927013   10580 cri.go:89] found id: ""
	I1217 02:05:29.927013   10580 logs.go:282] 0 containers: []
	W1217 02:05:29.927013   10580 logs.go:284] No container was found matching "etcd"
	I1217 02:05:29.927013   10580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:05:29.931518   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:05:29.980022   10580 cri.go:89] found id: ""
	I1217 02:05:29.980022   10580 logs.go:282] 0 containers: []
	W1217 02:05:29.980022   10580 logs.go:284] No container was found matching "coredns"
	I1217 02:05:29.980022   10580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:05:29.984478   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:05:30.032552   10580 cri.go:89] found id: ""
	I1217 02:05:30.032552   10580 logs.go:282] 0 containers: []
	W1217 02:05:30.032552   10580 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:05:30.032552   10580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:05:30.037694   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:05:30.082177   10580 cri.go:89] found id: ""
	I1217 02:05:30.082177   10580 logs.go:282] 0 containers: []
	W1217 02:05:30.082177   10580 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:05:30.082177   10580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:05:30.087245   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:05:30.130585   10580 cri.go:89] found id: ""
	I1217 02:05:30.130585   10580 logs.go:282] 0 containers: []
	W1217 02:05:30.130585   10580 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:05:30.130585   10580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:05:30.137646   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:05:30.177235   10580 cri.go:89] found id: ""
	I1217 02:05:30.177235   10580 logs.go:282] 0 containers: []
	W1217 02:05:30.177235   10580 logs.go:284] No container was found matching "kindnet"
	I1217 02:05:30.177235   10580 logs.go:123] Gathering logs for container status ...
	I1217 02:05:30.177235   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:05:30.227457   10580 logs.go:123] Gathering logs for kubelet ...
	I1217 02:05:30.227457   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:05:30.291457   10580 logs.go:123] Gathering logs for dmesg ...
	I1217 02:05:30.291457   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:05:30.331904   10580 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:05:30.331904   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:05:30.416101   10580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:05:30.405239   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.406412   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.407374   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.408863   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.410358   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:05:30.405239   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.406412   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.407374   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.408863   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.410358   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:05:30.416101   10580 logs.go:123] Gathering logs for Docker ...
	I1217 02:05:30.416101   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:05:30.444965   10580 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001314016s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 02:05:30.445965   10580 out.go:285] * 
	W1217 02:05:30.445965   10580 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001314016s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:05:30.445965   10580 out.go:285] * 
	W1217 02:05:30.447753   10580 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:05:30.453258   10580 out.go:203] 
	W1217 02:05:30.456588   10580 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001314016s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:05:30.457182   10580 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:05:30.457182   10580 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 02:05:30.459905   10580 out.go:203] 
	
	
	==> Docker <==
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.386670361Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.386753370Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.386763871Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.386768771Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.386775572Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.386796774Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.386828078Z" level=info msg="Initializing buildkit"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.501948357Z" level=info msg="Completed buildkit initialization"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.511624614Z" level=info msg="Daemon has completed initialization"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.511803733Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.511841238Z" level=info msg="API listen on [::]:2376"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.511803133Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 01:57:21 newest-cni-383500 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 01:57:22 newest-cni-383500 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Loaded network plugin cni"
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 01:57:22 newest-cni-383500 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:05:32.422452   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:32.424569   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:32.425982   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:32.427013   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:32.428280   10612 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +5.854693] CPU: 4 PID: 461784 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fc56db92b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fc56db92af6.
	[  +0.000001] RSP: 002b:00007fffd59b2fe0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000000] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.890415] CPU: 4 PID: 461911 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7ff5a808ab20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7ff5a808aaf6.
	[  +0.000001] RSP: 002b:00007ffc5c7667f0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 02:05:32 up  2:24,  0 user,  load average: 0.81, 1.75, 3.08
	Linux newest-cni-383500 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:05:29 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:05:29 newest-cni-383500 kubelet[10339]: E1217 02:05:29.617466   10339 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:05:29 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:05:29 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:05:30 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 320.
	Dec 17 02:05:30 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:05:30 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:05:30 newest-cni-383500 kubelet[10448]: E1217 02:05:30.352610   10448 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:05:30 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:05:30 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:05:31 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 321.
	Dec 17 02:05:31 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:05:31 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:05:31 newest-cni-383500 kubelet[10478]: E1217 02:05:31.101002   10478 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:05:31 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:05:31 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:05:31 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 322.
	Dec 17 02:05:31 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:05:31 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:05:31 newest-cni-383500 kubelet[10504]: E1217 02:05:31.875439   10504 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:05:31 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:05:31 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:05:32 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 323.
	Dec 17 02:05:32 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:05:32 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-383500 -n newest-cni-383500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-383500 -n newest-cni-383500: exit status 6 (605.6816ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 02:05:33.442451    6060 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-383500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-383500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/FirstStart (522.78s)
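The kubelet journal above pins down the root cause for this failure: kubelet v1.35.0-beta.0 exits on startup because the WSL2 node is still on the legacy cgroup v1 hierarchy ("kubelet is configured to not run on a host using cgroup v1"), so systemd restarts it in a loop (restart counter 320+) and kubeadm's health check at 127.0.0.1:10248 never succeeds. The log itself names two outs: the kubeadm warning's kubelet configuration option 'FailCgroupV1' set to 'false', and minikube's suggestion to pass --extra-config=kubelet.cgroup-driver=systemd. A minimal diagnostic and workaround sketch follows; the stat check and the minikube flag are taken from standard usage and the log above, while the .wslconfig change is an assumption drawn from common WSL2 guidance, not something verified in this report.

	# Check which cgroup hierarchy the node is using:
	# "cgroup2fs" means cgroup v2; "tmpfs" means the legacy v1 hierarchy.
	stat -fc %T /sys/fs/cgroup/

	# On a WSL2 host stuck on cgroup v1, a commonly cited workaround
	# (assumption, not verified here) is to disable the v1 controllers at
	# boot via %UserProfile%\.wslconfig, then restart WSL with `wsl --shutdown`:
	#   [wsl2]
	#   kernelCommandLine = cgroup_no_v1=all

	# Or follow the suggestion minikube prints above for this profile:
	minikube start -p newest-cni-383500 --extra-config=kubelet.cgroup-driver=systemd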

TestStartStop/group/no-preload/serial/DeployApp (5.49s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-184000 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) Non-zero exit: kubectl --context no-preload-184000 create -f testdata\busybox.yaml: exit status 1 (100.657ms)

** stderr ** 
	error: context "no-preload-184000" does not exist

** /stderr **
start_stop_delete_test.go:194: kubectl --context no-preload-184000 create -f testdata\busybox.yaml failed: exit status 1
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
E1217 02:02:56.895973    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-044000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-184000
helpers_test.go:244: (dbg) docker inspect no-preload-184000:

-- stdout --
	[
	    {
	        "Id": "335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed",
	        "Created": "2025-12-17T01:54:01.802457191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 400896,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:54:02.102156548Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/hostname",
	        "HostsPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/hosts",
	        "LogPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed-json.log",
	        "Name": "/no-preload-184000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-184000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-184000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-184000",
	                "Source": "/var/lib/docker/volumes/no-preload-184000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-184000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-184000",
	                "name.minikube.sigs.k8s.io": "no-preload-184000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "878415a4285bb4e9322b366762510a9c3489066b0ef84b5d48358f5f81e082bf",
	            "SandboxKey": "/var/run/docker/netns/878415a4285b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62904"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62905"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62907"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62908"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-184000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6adb91d102dfa92bfa154127e93e39401be06a5d21df5043f3e85e012e93e321",
	                    "EndpointID": "8e3f71a707f374d60db9e819d8097a078527854d326de7a03065e5d1fcc8c8bd",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-184000",
	                        "335cbfb80690"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
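The docker inspect dump above is captured for post-mortem debugging; one useful part when triaging is the host-port table under NetworkSettings.Ports, which is how the harness reaches the node over SSH. A minimal Go sketch that reads the mapped SSH port (container name taken from the log, docker CLI assumed on PATH):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// inspectEntry models only the fields this sketch needs from
	// the JSON array that `docker inspect` prints.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]struct{ HostIp, HostPort string }
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "no-preload-184000").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		// "22/tcp" is the container's SSH port; above it maps to 127.0.0.1:62904.
		for _, b := range entries[0].NetworkSettings.Ports["22/tcp"] {
			fmt.Printf("ssh reachable at %s:%s\n", b.HostIp, b.HostPort)
		}
	}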
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-184000 -n no-preload-184000
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-184000 -n no-preload-184000: exit status 6 (563.9705ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 02:02:57.481033    8168 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-184000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
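Exit status 6 is tolerated here because the host shows Running while the kubeconfig no longer names the cluster, which is exactly what the status.go:458 error reports. A minimal Go sketch of that kubeconfig check using client-go's clientcmd (an illustration under those assumptions, not minikube's exact code):

	package main

	import (
		"fmt"

		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Path taken from the log line above.
		cfg, err := clientcmd.LoadFromFile(`C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`)
		if err != nil {
			panic(err)
		}
		if _, ok := cfg.Clusters["no-preload-184000"]; !ok {
			// Same condition as the error above; running
			// "minikube update-context" rewrites the entry.
			fmt.Println(`"no-preload-184000" does not appear in the kubeconfig`)
		}
	}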
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-184000 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-184000 logs -n 25: (1.1430916s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-044000 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0        │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:56 UTC │
	│ addons  │ enable metrics-server -p embed-certs-653800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                   │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ stop    │ -p embed-certs-653800 --alsologtostderr -v=3                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-653800 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                              │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p embed-certs-653800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-278200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                         │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ stop    │ -p default-k8s-diff-port-278200 --alsologtostderr -v=3                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-278200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p default-k8s-diff-port-278200 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ old-k8s-version-044000 image list --format=json                                                                                                                                                                            │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ pause   │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ unpause │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │                     │
	│ image   │ embed-certs-653800 image list --format=json                                                                                                                                                                                │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ default-k8s-diff-port-278200 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:56:50
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
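	// A minimal Go sketch that splits a log line in the [IWEF]mmdd hh:mm:ss.uuuuuu
	// threadid file:line] msg format documented above. The regexp is fitted to
	// these logs as an illustration; it is not klog's own parser.
	package main
	
	import (
		"fmt"
		"regexp"
	)
	
	var klogLine = regexp.MustCompile(`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d+)\s+(\d+) (\S+:\d+)\] (.*)$`)
	
	func main() {
		line := "I1217 01:56:50.801354   10580 out.go:360] Setting OutFile to fd 1172 ..."
		if m := klogLine.FindStringSubmatch(line); m != nil {
			fmt.Printf("severity=%s date=%s time=%s threadid=%s source=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}
	}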
	I1217 01:56:50.801354   10580 out.go:360] Setting OutFile to fd 1172 ...
	I1217 01:56:50.842347   10580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:56:50.842347   10580 out.go:374] Setting ErrFile to fd 824...
	I1217 01:56:50.842347   10580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:56:50.868487   10580 out.go:368] Setting JSON to false
	I1217 01:56:50.873633   10580 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8199,"bootTime":1765928411,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 01:56:50.873795   10580 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 01:56:50.877230   10580 out.go:179] * [newest-cni-383500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 01:56:50.879602   10580 notify.go:221] Checking for updates...
	I1217 01:56:50.882592   10580 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 01:56:50.886357   10580 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:56:50.888496   10580 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 01:56:50.891194   10580 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:56:50.892900   10580 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:56:50.897014   10580 config.go:182] Loaded profile config "default-k8s-diff-port-278200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:56:50.897014   10580 config.go:182] Loaded profile config "embed-certs-653800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:56:50.898014   10580 config.go:182] Loaded profile config "no-preload-184000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 01:56:50.898014   10580 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:56:51.023603   10580 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 01:56:51.027600   10580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:56:51.269309   10580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:56:51.250186339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:56:51.271302   10580 out.go:179] * Using the docker driver based on user configuration
	I1217 01:56:51.274302   10580 start.go:309] selected driver: docker
	I1217 01:56:51.274302   10580 start.go:927] validating driver "docker" against <nil>
	I1217 01:56:51.274302   10580 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:56:51.315871   10580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:56:51.584149   10580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:56:51.563534441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:56:51.584149   10580 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 01:56:51.584149   10580 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 01:56:51.585155   10580 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 01:56:51.589148   10580 out.go:179] * Using Docker Desktop driver with root privileges
	I1217 01:56:51.590146   10580 cni.go:84] Creating CNI manager for ""
	I1217 01:56:51.591150   10580 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 01:56:51.591150   10580 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 01:56:51.591150   10580 start.go:353] cluster config:
	{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
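	// The single-line cluster config above is a Go struct printed with fmt's %+v
	// verb, which is why it reads as {Field:value Field:value ...}. A toy sketch
	// with a few of the logged fields (not minikube's full ClusterConfig type):
	package main
	
	import "fmt"
	
	type clusterConfig struct {
		Name          string
		Memory        int
		CPUs          int
		Driver        string
		APIServerPort int
	}
	
	func main() {
		cfg := clusterConfig{Name: "newest-cni-383500", Memory: 3072, CPUs: 2, Driver: "docker", APIServerPort: 8443}
		fmt.Printf("%+v\n", cfg) // {Name:newest-cni-383500 Memory:3072 CPUs:2 Driver:docker APIServerPort:8443}
	}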
	I1217 01:56:51.593150   10580 out.go:179] * Starting "newest-cni-383500" primary control-plane node in "newest-cni-383500" cluster
	I1217 01:56:51.596146   10580 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 01:56:51.597151   10580 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:56:51.600152   10580 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:56:51.600152   10580 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:56:51.600152   10580 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 01:56:51.600152   10580 cache.go:65] Caching tarball of preloaded images
	I1217 01:56:51.600152   10580 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 01:56:51.600152   10580 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 01:56:51.601151   10580 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 01:56:51.601151   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json: {Name:mkf80e0956bcb8fe665f18deea862644aea3658c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:56:51.682130   10580 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:56:51.682186   10580 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:56:51.682226   10580 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:56:51.682296   10580 start.go:360] acquireMachinesLock for newest-cni-383500: {Name:mk34ae41921c4a11acc2a38ede8796b825a35934 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:56:51.682463   10580 start.go:364] duration metric: took 127.8µs to acquireMachinesLock for "newest-cni-383500"
	I1217 01:56:51.682643   10580 start.go:93] Provisioning new machine with config: &{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 01:56:51.682643   10580 start.go:125] createHost starting for "" (driver="docker")
	W1217 01:56:50.658968   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	W1217 01:56:53.155347   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	I1217 01:56:50.357392    6652 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:63284/healthz ...
	I1217 01:56:50.369628    6652 api_server.go:279] https://127.0.0.1:63284/healthz returned 200:
	ok
	I1217 01:56:50.373212    6652 api_server.go:141] control plane version: v1.34.2
	I1217 01:56:50.373212    6652 api_server.go:131] duration metric: took 1.5164341s to wait for apiserver health ...
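	// A minimal sketch of the healthz poll logged above: GET the apiserver's
	// /healthz until it returns 200 "ok". The URL is the one from the log;
	// skipping TLS verification is a simplification (minikube validates against
	// its own CA).
	package main
	
	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)
	
	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for deadline := time.Now().Add(4 * time.Minute); time.Now().Before(deadline); time.Sleep(500 * time.Millisecond) {
			resp, err := client.Get("https://127.0.0.1:63284/healthz")
			if err != nil {
				continue // apiserver not accepting connections yet
			}
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		fmt.Println("apiserver did not become healthy in time")
	}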
	I1217 01:56:50.373212    6652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 01:56:50.383881    6652 system_pods.go:59] 8 kube-system pods found
	I1217 01:56:50.383935    6652 system_pods.go:61] "coredns-66bc5c9577-mq7nr" [e3b40fbf-c8cf-4da5-a3e1-544cdb2cf9d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:56:50.383972    6652 system_pods.go:61] "etcd-default-k8s-diff-port-278200" [a72b7231-603f-4f60-9395-a7f842c86452] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 01:56:50.383972    6652 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-278200" [8dc29fce-1059-4acc-8a09-64f9eed9a84a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 01:56:50.383972    6652 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-278200" [916662d2-3e76-4bf9-9b11-b4c5cd906d1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 01:56:50.383972    6652 system_pods.go:61] "kube-proxy-hp6zw" [8399cddb-2b50-4401-adbb-83631e5b1a3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 01:56:50.383972    6652 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-278200" [01597b66-6476-4b34-9010-67c8fa5ba2b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 01:56:50.383972    6652 system_pods.go:61] "metrics-server-746fcd58dc-zg2gc" [1347d3c4-9a8a-4e8c-9c00-d649fa23179f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 01:56:50.383972    6652 system_pods.go:61] "storage-provisioner" [89564fde-7887-446a-bab4-f662064c9fde] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 01:56:50.383972    6652 system_pods.go:74] duration metric: took 10.76ms to wait for pod list to return data ...
	I1217 01:56:50.383972    6652 default_sa.go:34] waiting for default service account to be created ...
	I1217 01:56:50.472293    6652 default_sa.go:45] found service account: "default"
	I1217 01:56:50.472293    6652 default_sa.go:55] duration metric: took 88.3195ms for default service account to be created ...
	I1217 01:56:50.472293    6652 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 01:56:50.550966    6652 system_pods.go:86] 8 kube-system pods found
	I1217 01:56:50.550966    6652 system_pods.go:89] "coredns-66bc5c9577-mq7nr" [e3b40fbf-c8cf-4da5-a3e1-544cdb2cf9d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:56:50.551963    6652 system_pods.go:89] "etcd-default-k8s-diff-port-278200" [a72b7231-603f-4f60-9395-a7f842c86452] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 01:56:50.551963    6652 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-278200" [8dc29fce-1059-4acc-8a09-64f9eed9a84a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 01:56:50.551963    6652 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-278200" [916662d2-3e76-4bf9-9b11-b4c5cd906d1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 01:56:50.551963    6652 system_pods.go:89] "kube-proxy-hp6zw" [8399cddb-2b50-4401-adbb-83631e5b1a3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 01:56:50.551963    6652 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-278200" [01597b66-6476-4b34-9010-67c8fa5ba2b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 01:56:50.551963    6652 system_pods.go:89] "metrics-server-746fcd58dc-zg2gc" [1347d3c4-9a8a-4e8c-9c00-d649fa23179f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 01:56:50.551963    6652 system_pods.go:89] "storage-provisioner" [89564fde-7887-446a-bab4-f662064c9fde] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 01:56:50.551963    6652 system_pods.go:126] duration metric: took 79.6691ms to wait for k8s-apps to be running ...
	I1217 01:56:50.551963    6652 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 01:56:50.558963    6652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:56:50.647965    6652 system_svc.go:56] duration metric: took 96.0006ms WaitForService to wait for kubelet
	I1217 01:56:50.647965    6652 kubeadm.go:587] duration metric: took 11.8438008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:56:50.647965    6652 node_conditions.go:102] verifying NodePressure condition ...
	I1217 01:56:50.655959    6652 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1217 01:56:50.655959    6652 node_conditions.go:123] node cpu capacity is 16
	I1217 01:56:50.655959    6652 node_conditions.go:105] duration metric: took 7.9936ms to run NodePressure ...
	I1217 01:56:50.655959    6652 start.go:242] waiting for startup goroutines ...
	I1217 01:56:50.655959    6652 start.go:247] waiting for cluster config update ...
	I1217 01:56:50.655959    6652 start.go:256] writing updated cluster config ...
	I1217 01:56:50.662974    6652 ssh_runner.go:195] Run: rm -f paused
	I1217 01:56:50.670974    6652 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:56:50.679961    6652 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mq7nr" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 01:56:52.758113    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:56:51.685685   10580 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 01:56:51.686059   10580 start.go:159] libmachine.API.Create for "newest-cni-383500" (driver="docker")
	I1217 01:56:51.686127   10580 client.go:173] LocalClient.Create starting
	I1217 01:56:51.686740   10580 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1217 01:56:51.686997   10580 main.go:143] libmachine: Decoding PEM data...
	I1217 01:56:51.686997   10580 main.go:143] libmachine: Parsing certificate...
	I1217 01:56:51.687153   10580 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1217 01:56:51.687320   10580 main.go:143] libmachine: Decoding PEM data...
	I1217 01:56:51.687320   10580 main.go:143] libmachine: Parsing certificate...
	I1217 01:56:51.691438   10580 cli_runner.go:164] Run: docker network inspect newest-cni-383500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 01:56:51.737765   10580 cli_runner.go:211] docker network inspect newest-cni-383500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 01:56:51.740755   10580 network_create.go:284] running [docker network inspect newest-cni-383500] to gather additional debugging logs...
	I1217 01:56:51.740755   10580 cli_runner.go:164] Run: docker network inspect newest-cni-383500
	W1217 01:56:51.801443   10580 cli_runner.go:211] docker network inspect newest-cni-383500 returned with exit code 1
	I1217 01:56:51.802437   10580 network_create.go:287] error running [docker network inspect newest-cni-383500]: docker network inspect newest-cni-383500: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-383500 not found
	I1217 01:56:51.802437   10580 network_create.go:289] output of [docker network inspect newest-cni-383500]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-383500 not found
	
	** /stderr **
	I1217 01:56:51.804999   10580 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:56:51.880941   10580 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:56:51.896006   10580 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:56:51.908781   10580 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000faab70}
	I1217 01:56:51.908781   10580 network_create.go:124] attempt to create docker network newest-cni-383500 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1217 01:56:51.911893   10580 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500
	W1217 01:56:51.964261   10580 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500 returned with exit code 1
	W1217 01:56:51.964261   10580 network_create.go:149] failed to create docker network newest-cni-383500 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1217 01:56:51.964261   10580 network_create.go:116] failed to create docker network newest-cni-383500 192.168.67.0/24, will retry: subnet is taken
	I1217 01:56:51.989641   10580 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:56:52.003768   10580 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f5b5c0}
	I1217 01:56:52.003768   10580 network_create.go:124] attempt to create docker network newest-cni-383500 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 01:56:52.007075   10580 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500
	I1217 01:56:52.149371   10580 network_create.go:108] docker network newest-cni-383500 192.168.76.0/24 created
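	// A minimal sketch of the retry just logged: attempt candidate /24 subnets
	// in order and fall back when Docker reports an address-pool overlap, as it
	// did for 192.168.67.0/24 above. Assumes the docker CLI is on PATH; the
	// subnet list and error text are taken from the log.
	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		for _, subnet := range []string{"192.168.67.0/24", "192.168.76.0/24"} {
			gateway := strings.TrimSuffix(subnet, "0/24") + "1" // .0/24 -> .1
			out, err := exec.Command("docker", "network", "create", "--driver=bridge",
				"--subnet="+subnet, "--gateway="+gateway, "newest-cni-383500").CombinedOutput()
			if err == nil {
				fmt.Printf("created network on %s\n", subnet)
				return
			}
			if strings.Contains(string(out), "Pool overlaps") {
				continue // subnet already in use on this host; try the next one
			}
			panic(string(out))
		}
	}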
	I1217 01:56:52.149371   10580 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-383500" container
	I1217 01:56:52.161020   10580 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 01:56:52.221477   10580 cli_runner.go:164] Run: docker volume create newest-cni-383500 --label name.minikube.sigs.k8s.io=newest-cni-383500 --label created_by.minikube.sigs.k8s.io=true
	I1217 01:56:52.277863   10580 oci.go:103] Successfully created a docker volume newest-cni-383500
	I1217 01:56:52.281622   10580 cli_runner.go:164] Run: docker run --rm --name newest-cni-383500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-383500 --entrypoint /usr/bin/test -v newest-cni-383500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 01:56:53.597934   10580 cli_runner.go:217] Completed: docker run --rm --name newest-cni-383500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-383500 --entrypoint /usr/bin/test -v newest-cni-383500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.3162925s)
	I1217 01:56:53.597934   10580 oci.go:107] Successfully prepared a docker volume newest-cni-383500
	I1217 01:56:53.597934   10580 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:56:53.597934   10580 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 01:56:53.602121   10580 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-383500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	W1217 01:56:55.164284   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	W1217 01:56:57.657496   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	W1217 01:56:55.197325    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:56:57.691480    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:56:59.691833    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:00.414359   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	I1217 01:57:01.221784   10700 pod_ready.go:94] pod "coredns-66bc5c9577-rkqgn" is "Ready"
	I1217 01:57:01.221832   10700 pod_ready.go:86] duration metric: took 31.57611s for pod "coredns-66bc5c9577-rkqgn" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.231015   10700 pod_ready.go:83] waiting for pod "etcd-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.305989   10700 pod_ready.go:94] pod "etcd-embed-certs-653800" is "Ready"
	I1217 01:57:01.306038   10700 pod_ready.go:86] duration metric: took 74.9721ms for pod "etcd-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.362260   10700 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.373797   10700 pod_ready.go:94] pod "kube-apiserver-embed-certs-653800" is "Ready"
	I1217 01:57:01.373797   10700 pod_ready.go:86] duration metric: took 11.4721ms for pod "kube-apiserver-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.379508   10700 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.421736   10700 pod_ready.go:94] pod "kube-controller-manager-embed-certs-653800" is "Ready"
	I1217 01:57:01.421778   10700 pod_ready.go:86] duration metric: took 42.2686ms for pod "kube-controller-manager-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.549272   10700 pod_ready.go:83] waiting for pod "kube-proxy-tnkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.831507   10700 pod_ready.go:94] pod "kube-proxy-tnkvj" is "Ready"
	I1217 01:57:02.832053   10700 pod_ready.go:86] duration metric: took 282.7765ms for pod "kube-proxy-tnkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.837864   10700 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.850194   10700 pod_ready.go:94] pod "kube-scheduler-embed-certs-653800" is "Ready"
	I1217 01:57:02.850247   10700 pod_ready.go:86] duration metric: took 12.3828ms for pod "kube-scheduler-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.850295   10700 pod_ready.go:40] duration metric: took 33.2150881s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:57:02.959538   10700 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 01:57:03.043739   10700 out.go:179] * Done! kubectl is now configured to use "embed-certs-653800" cluster and "default" namespace by default
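	// A minimal sketch of the extra "Ready" wait that just finished: list
	// kube-system pods by one of the label selectors from the log and inspect
	// the PodReady condition. Assumes client-go and a reachable cluster; this
	// is an illustration, not minikube's pod_ready.go.
	package main
	
	import (
		"context"
		"fmt"
	
		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		pods, err := client.CoreV1().Pods("kube-system").List(context.Background(),
			metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
		if err != nil {
			panic(err)
		}
		for _, pod := range pods.Items {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					fmt.Printf("pod %q Ready=%s\n", pod.Name, cond.Status)
				}
			}
		}
	}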
	W1217 01:57:01.693305    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:04.195654    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:06.294817    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:08.700814    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:57:10.483352   10580 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-383500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (16.8803148s)
	I1217 01:57:10.483443   10580 kic.go:203] duration metric: took 16.8852234s to extract preloaded images to volume ...
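	// The sidecar run that just completed seeds a named volume without starting
	// the node: the preload tarball is mounted read-only, the volume is mounted
	// at /extractDir, and the container's tar entrypoint unpacks into it. A
	// sketch re-issuing the same command from Go (paths are the ones logged):
	package main
	
	import "os/exec"
	
	func main() {
		cmd := exec.Command("docker", "run", "--rm", "--entrypoint", "/usr/bin/tar",
			"-v", `C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro`,
			"-v", "newest-cni-383500:/extractDir",
			"gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141",
			"-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
		if out, err := cmd.CombinedOutput(); err != nil {
			panic(string(out))
		}
	}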
	I1217 01:57:10.489300   10580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:57:10.753192   10580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:57:10.732557974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:57:10.757222   10580 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
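Here the daemon is interrogated twice: once for the full state as JSON (docker system info) and once for just the security options via a Go template. The same field can also be pulled from the JSON directly; a sketch assuming jq is available on the host:
	docker system info --format '{{json .}}' > info.json   # full daemon state, as above
	docker info --format '{{json .SecurityOptions}}'       # single field via Go template
	jq '.SecurityOptions' info.json                        # equivalent extraction (assumes jq is installed)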
	W1217 01:57:11.205059    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:13.689668    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:57:11.047255   10580 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-383500 --name newest-cni-383500 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-383500 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-383500 --network newest-cni-383500 --ip 192.168.76.2 --volume newest-cni-383500:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
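The docker run above is the entire "node": a privileged container with tmpfs mounts so systemd can function, a fixed IP on a per-profile network, and SSH/API-server/Docker ports published to random 127.0.0.1 host ports. A trimmed sketch of the same invocation (image reference, IP, and resource sizes copied from the log; the network subnet is an assumption):
	# Per-profile network carrying the fixed node IP (subnet assumed).
	docker network create --subnet 192.168.76.0/24 newest-cni-383500
	# Privileged node container; --tmpfs /tmp and /run keep systemd functional,
	# and each --publish maps a container port to a random 127.0.0.1 host port.
	docker run -d -t --privileged \
	  --security-opt seccomp=unconfined --security-opt apparmor=unconfined \
	  --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro \
	  --network newest-cni-383500 --ip 192.168.76.2 \
	  --memory=3072mb --cpus=2 \
	  --publish=127.0.0.1::22 --publish=127.0.0.1::8443 \
	  --hostname newest-cni-383500 --name newest-cni-383500 \
	  gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78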
	I1217 01:57:11.789740   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Running}}
	I1217 01:57:11.849518   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 01:57:11.908509   10580 cli_runner.go:164] Run: docker exec newest-cni-383500 stat /var/lib/dpkg/alternatives/iptables
	I1217 01:57:12.021676   10580 oci.go:144] the created container "newest-cni-383500" has a running status.
	I1217 01:57:12.021676   10580 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa...
	I1217 01:57:12.131609   10580 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 01:57:12.208714   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 01:57:12.272788   10580 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 01:57:12.273496   10580 kic_runner.go:114] Args: [docker exec --privileged newest-cni-383500 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 01:57:12.387830   10580 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa...
	I1217 01:57:14.496810   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 01:57:14.552924   10580 machine.go:94] provisionDockerMachine start ...
	I1217 01:57:14.556597   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:14.614668   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:14.628589   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:14.628589   10580 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:57:14.803670   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
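Provisioning reaches the container through the 22/tcp mapping queried with docker container inspect; the mapped port (63415 in the dialer struct above) and the generated key allow the same session to be opened by hand. An illustrative one-liner from the Windows host, using paths and port from the log:
	ssh -p 63415 -i C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa docker@127.0.0.1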
	
	I1217 01:57:14.803752   10580 ubuntu.go:182] provisioning hostname "newest-cni-383500"
	I1217 01:57:14.806966   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:14.872659   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:14.873288   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:14.873288   10580 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-383500 && echo "newest-cni-383500" | sudo tee /etc/hostname
	I1217 01:57:15.070847   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 01:57:15.076754   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:15.138180   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:15.138558   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:15.138558   10580 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-383500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-383500/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-383500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:57:15.322611   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:57:15.322611   10580 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 01:57:15.322611   10580 ubuntu.go:190] setting up certificates
	I1217 01:57:15.322611   10580 provision.go:84] configureAuth start
	I1217 01:57:15.327543   10580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 01:57:15.379974   10580 provision.go:143] copyHostCerts
	I1217 01:57:15.380366   10580 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 01:57:15.380414   10580 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 01:57:15.380832   10580 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 01:57:15.382184   10580 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 01:57:15.382226   10580 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 01:57:15.382581   10580 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 01:57:15.383683   10580 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 01:57:15.383736   10580 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 01:57:15.384159   10580 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 01:57:15.384159   10580 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-383500 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-383500]
	I1217 01:57:15.508571   10580 provision.go:177] copyRemoteCerts
	I1217 01:57:15.512616   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:57:15.515422   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:15.573004   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:15.707286   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 01:57:15.746639   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 01:57:15.775638   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:57:15.812045   10580 provision.go:87] duration metric: took 488.4307ms to configureAuth
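configureAuth generates a Docker daemon server certificate whose SANs cover every name the daemon may be reached by (the san=[...] list above), then ships CA, cert, and key to /etc/docker. An openssl sketch of an equivalent server certificate (file names and validity period are illustrative; the SAN list is from the log):
	# Assumes ca.pem / ca-key.pem already exist, as in the log's cert store.
	openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
	  -out server.csr -subj "/O=jenkins.newest-cni-383500"
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.76.2,DNS:localhost,DNS:minikube,DNS:newest-cni-383500')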
	I1217 01:57:15.812045   10580 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:57:15.812045   10580 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 01:57:15.815050   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	W1217 01:57:15.691769    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:17.697151    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:57:15.867044   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:15.867044   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:15.867044   10580 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 01:57:16.041586   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 01:57:16.041586   10580 ubuntu.go:71] root file system type: overlay
	I1217 01:57:16.041586   10580 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 01:57:16.045689   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:16.104012   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:16.104611   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:16.104703   10580 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 01:57:16.297193   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 01:57:16.300844   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:16.360905   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:16.361498   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:16.361540   10580 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 01:57:18.042542   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-17 01:57:16.287130539 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1217 01:57:18.042542   10580 machine.go:97] duration metric: took 3.4895662s to provisionDockerMachine
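The SSH command issued at 01:57:16 swaps in docker.service.new and restarts the daemon only when diff -u reports a difference (diff exits non-zero), so a rerun on an already-configured node is a no-op. The same guard written out as a generic sketch (FILE and NEW are placeholders):
	FILE=/lib/systemd/system/docker.service
	NEW=$FILE.new
	# diff exits 0 when the files are identical, so the replacement branch runs only on change.
	sudo diff -u "$FILE" "$NEW" || {
	  sudo mv "$NEW" "$FILE"
	  sudo systemctl daemon-reload
	  sudo systemctl enable docker
	  sudo systemctl restart docker
	}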
	I1217 01:57:18.042542   10580 client.go:176] duration metric: took 26.3559894s to LocalClient.Create
	I1217 01:57:18.042542   10580 start.go:167] duration metric: took 26.3560942s to libmachine.API.Create "newest-cni-383500"
	I1217 01:57:18.042542   10580 start.go:293] postStartSetup for "newest-cni-383500" (driver="docker")
	I1217 01:57:18.042542   10580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:57:18.050002   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:57:18.053976   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.112173   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:18.256941   10580 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:57:18.268729   10580 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:57:18.268729   10580 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:57:18.268729   10580 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 01:57:18.268729   10580 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 01:57:18.269469   10580 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 01:57:18.273808   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:57:18.289831   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 01:57:18.317384   10580 start.go:296] duration metric: took 274.8381ms for postStartSetup
	I1217 01:57:18.322385   10580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 01:57:18.369389   10580 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 01:57:18.375387   10580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:57:18.381078   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.432604   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:18.561382   10580 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:57:18.571573   10580 start.go:128] duration metric: took 26.8885332s to createHost
	I1217 01:57:18.571573   10580 start.go:83] releasing machines lock for "newest-cni-383500", held for 26.8886481s
	I1217 01:57:18.575096   10580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 01:57:18.630669   10580 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 01:57:18.634666   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.635666   10580 ssh_runner.go:195] Run: cat /version.json
	I1217 01:57:18.639677   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.695664   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:18.695664   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	W1217 01:57:18.859792   10580 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
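The status-127 failure here appears to be the host-side command name (curl.exe) being run inside the Linux node over SSH, where only curl exists; it triggers the advisory proxy warning at 01:57:19 below. The reachability check itself, run with the Linux binary (container name from the log):
	# Probe registry reachability from inside the node (curl, not curl.exe).
	docker exec newest-cni-383500 curl -sS -m 2 https://registry.k8s.io/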
	I1217 01:57:18.877228   10580 ssh_runner.go:195] Run: systemctl --version
	I1217 01:57:18.892439   10580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:57:18.900947   10580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:57:18.905555   10580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:57:18.954841   10580 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
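Competing bridge/podman CNI configs are parked with a .mk_disabled suffix rather than deleted, so they stay restorable. The same find, with the quoting it needs in an interactive shell (the log shows the form as passed over SSH):
	sudo find /etc/cni/net.d -maxdepth 1 -type f \
	  \( \( -name '*bridge*' -o -name '*podman*' \) -a -not -name '*.mk_disabled' \) \
	  -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;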
	I1217 01:57:18.954952   10580 start.go:496] detecting cgroup driver to use...
	I1217 01:57:18.955015   10580 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:57:18.955015   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:57:18.991199   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1217 01:57:19.008171   10580 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 01:57:19.008230   10580 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 01:57:19.013119   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 01:57:19.028717   10580 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 01:57:19.032858   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 01:57:19.052914   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:57:19.072904   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 01:57:19.095550   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:57:19.115854   10580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:57:19.132848   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 01:57:19.151846   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 01:57:19.172853   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 01:57:19.193907   10580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:57:19.210892   10580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:57:19.227892   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:19.399536   10580 ssh_runner.go:195] Run: sudo systemctl restart containerd
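The sed series above edits /etc/containerd/config.toml in place: cgroupfs instead of the systemd cgroup driver, the runc v2 runtime, pause:3.10.1 as the sandbox image, and unprivileged ports enabled. A hand-assembled illustration of the resulting fragment (not a dump from the node):
	[plugins."io.containerd.grpc.v1.cri"]
	  enable_unprivileged_ports = true
	  sandbox_image = "registry.k8s.io/pause:3.10.1"
	  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	    runtime_type = "io.containerd.runc.v2"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	      SystemdCgroup = false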
	I1217 01:57:19.601453   10580 start.go:496] detecting cgroup driver to use...
	I1217 01:57:19.601453   10580 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:57:19.605450   10580 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 01:57:19.629461   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:57:19.656299   10580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:57:19.736745   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:57:19.764285   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:57:19.789001   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:57:19.815453   10580 ssh_runner.go:195] Run: which cri-dockerd
	I1217 01:57:19.827238   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 01:57:19.842026   10580 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 01:57:19.874597   10580 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 01:57:20.041348   10580 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 01:57:20.226962   10580 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 01:57:20.226962   10580 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 01:57:20.254551   10580 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 01:57:20.278555   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:20.468211   10580 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 01:57:21.513591   10580 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0453647s)
	I1217 01:57:21.520768   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:57:21.544117   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 01:57:21.578618   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:57:21.602252   10580 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 01:57:21.754251   10580 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 01:57:21.925790   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:22.049631   10580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 01:57:22.080439   10580 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 01:57:22.102178   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:22.247555   10580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 01:57:22.356045   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:57:22.374818   10580 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 01:57:22.380720   10580 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 01:57:22.388747   10580 start.go:564] Will wait 60s for crictl version
	I1217 01:57:22.393402   10580 ssh_runner.go:195] Run: which crictl
	I1217 01:57:22.405105   10580 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:57:22.456110   10580 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 01:57:22.460422   10580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:57:22.517812   10580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:57:22.562431   10580 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 01:57:22.566477   10580 cli_runner.go:164] Run: docker exec -t newest-cni-383500 dig +short host.docker.internal
	I1217 01:57:22.701109   10580 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 01:57:22.707802   10580 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 01:57:22.717558   10580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
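The hosts entry is refreshed atomically: filter out any stale host.minikube.internal line, append the current mapping, then sudo cp the temp file over /etc/hosts (a plain sudo echo >> /etc/hosts would fail because the redirection is evaluated before sudo runs). A simpler append-only variant for comparison:
	# Append the mapping only if absent (the log's version also removes stale entries first).
	grep -q 'host.minikube.internal' /etc/hosts || \
	  echo '192.168.65.254	host.minikube.internal' | sudo tee -a /etc/hosts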
	I1217 01:57:22.737642   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:22.798183   10580 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1217 01:57:20.222966    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:22.694494    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:57:23.189475    6652 pod_ready.go:94] pod "coredns-66bc5c9577-mq7nr" is "Ready"
	I1217 01:57:23.189475    6652 pod_ready.go:86] duration metric: took 32.5090332s for pod "coredns-66bc5c9577-mq7nr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.194104    6652 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.202184    6652 pod_ready.go:94] pod "etcd-default-k8s-diff-port-278200" is "Ready"
	I1217 01:57:23.202184    6652 pod_ready.go:86] duration metric: took 8.0443ms for pod "etcd-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.206828    6652 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.213978    6652 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-278200" is "Ready"
	I1217 01:57:23.213978    6652 pod_ready.go:86] duration metric: took 7.1505ms for pod "kube-apiserver-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.217306    6652 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.387857    6652 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-278200" is "Ready"
	I1217 01:57:23.387920    6652 pod_ready.go:86] duration metric: took 170.6119ms for pod "kube-controller-manager-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.587111    6652 pod_ready.go:83] waiting for pod "kube-proxy-hp6zw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.985373    6652 pod_ready.go:94] pod "kube-proxy-hp6zw" is "Ready"
	I1217 01:57:23.986730    6652 pod_ready.go:86] duration metric: took 399.613ms for pod "kube-proxy-hp6zw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:24.201566    6652 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:24.586537    6652 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-278200" is "Ready"
	I1217 01:57:24.586586    6652 pod_ready.go:86] duration metric: took 385.0143ms for pod "kube-scheduler-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:24.586640    6652 pod_ready.go:40] duration metric: took 33.9151651s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:57:24.687654    6652 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 01:57:25.088107    6652 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-278200" cluster and "default" namespace by default
	I1217 01:57:22.800238   10580 kubeadm.go:884] updating cluster {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:57:22.800267   10580 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:57:22.804334   10580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 01:57:22.840199   10580 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 01:57:22.840199   10580 docker.go:621] Images already preloaded, skipping extraction
	I1217 01:57:22.843860   10580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 01:57:22.875886   10580 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 01:57:22.875953   10580 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:57:22.876007   10580 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1217 01:57:22.876138   10580 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-383500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:57:22.881452   10580 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 01:57:22.963596   10580 cni.go:84] Creating CNI manager for ""
	I1217 01:57:22.963596   10580 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 01:57:22.963596   10580 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 01:57:22.963596   10580 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-383500 NodeName:newest-cni-383500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:57:22.964766   10580 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-383500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:57:22.971170   10580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:57:22.988148   10580 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:57:22.993571   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:57:23.008239   10580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 01:57:23.168781   10580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 01:57:23.268253   10580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
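The four stacked documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) land on the node here as /var/tmp/minikube/kubeadm.yaml.new. They can be checked without mutating the host via kubeadm's dry-run mode; a sketch using the paths and version from the log:
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml.new --dry-run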
	I1217 01:57:23.292920   10580 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 01:57:23.298948   10580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:57:23.555705   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:23.774461   10580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:57:23.797469   10580 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500 for IP: 192.168.76.2
	I1217 01:57:23.797574   10580 certs.go:195] generating shared ca certs ...
	I1217 01:57:23.797612   10580 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.797983   10580 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 01:57:23.797983   10580 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 01:57:23.801985   10580 certs.go:257] generating profile certs ...
	I1217 01:57:23.801985   10580 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key
	I1217 01:57:23.802608   10580 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.crt with IP's: []
	I1217 01:57:23.893499   10580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.crt ...
	I1217 01:57:23.893499   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.crt: {Name:mk018179fa6276f140d3c484dc77b112ade6a239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.894491   10580 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key ...
	I1217 01:57:23.894491   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key: {Name:mkf03a928d0759f4e80338ae1a94ef05274842bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.895493   10580 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8
	I1217 01:57:23.895493   10580 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 01:57:23.940939   10580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8 ...
	I1217 01:57:23.940939   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8: {Name:mk793887fd39b61b0148eb1aef73edce147dd7af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.941938   10580 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8 ...
	I1217 01:57:23.941938   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8: {Name:mk75e8d1cb53d5e553bcfb51860f15346eec2f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.941938   10580 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt
	I1217 01:57:23.956750   10580 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key
	I1217 01:57:23.958193   10580 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key
	I1217 01:57:23.958415   10580 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt with IP's: []
	I1217 01:57:24.067269   10580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt ...
	I1217 01:57:24.067269   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt: {Name:mk21db782682ec857bcf614d6ee83e5820624361 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:24.068316   10580 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key ...
	I1217 01:57:24.068316   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key: {Name:mk4bcb88a5770958ea52d64f6df1b6838f8b5fc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:24.097118   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 01:57:24.097649   10580 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 01:57:24.097791   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 01:57:24.098025   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 01:57:24.098025   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 01:57:24.098025   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 01:57:24.098812   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 01:57:24.100115   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:57:24.135459   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:57:24.165011   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:57:24.192410   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:57:24.481059   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 01:57:25.003692   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:57:25.038428   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:57:25.065081   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:57:25.099226   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 01:57:25.144094   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:57:25.174094   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 01:57:25.210940   10580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:57:25.237951   10580 ssh_runner.go:195] Run: openssl version
	I1217 01:57:25.254946   10580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.276935   10580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 01:57:25.294948   10580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.302943   10580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.306934   10580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.370952   10580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:57:25.390944   10580 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
	I1217 01:57:25.415186   10580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.434956   10580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:57:25.453960   10580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.460961   10580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.464957   10580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.515968   10580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:57:25.532957   10580 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 01:57:25.547952   10580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.565954   10580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 01:57:25.583961   10580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.591966   10580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.596965   10580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.654221   10580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:57:25.671221   10580 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
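The openssl/ln sequence above is the standard OpenSSL subject-hash trust install: each CA file under /usr/share/ca-certificates is hashed, then symlinked as <hash>.0 in /etc/ssl/certs so OpenSSL-based clients can resolve it on the default verify path. A minimal sketch for one certificate, reusing the hash value this run actually computed for 4168.pem:

    # print the subject hash OpenSSL uses for directory lookup (51391683 in this run)
    openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
    # expose the cert under that hash on the default verify path
    sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0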
	I1217 01:57:25.688222   10580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:57:25.696236   10580 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
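The stat failure above is minikube's first-start probe: it checks for a certificate that only a previous kubeadm run would have generated, and a non-zero exit is read as a fresh node. The probe can be reproduced by hand on the node (path exactly as logged):

    # exit status 1 (No such file or directory) means no prior cluster: treat as first start
    stat /var/lib/minikube/certs/apiserver-kubelet-client.crt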
	I1217 01:57:25.696236   10580 kubeadm.go:401] StartCluster: {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
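The StartCluster dump above carries the knobs minikube uses to render /var/tmp/minikube/kubeadm.yaml (the --config file passed to kubeadm init below); the only non-default kubeadm option is the ExtraOptions entry pod-network-cidr=10.42.0.0/16. A hedged way to confirm what was rendered on the node; podSubnet is the standard kubeadm field that pod-network-cidr maps to, but the rendered file itself is not shown in this log, so the expected output is an assumption (and the report's actual binary is out/minikube-windows-amd64.exe rather than plain minikube):

    # assumption: the ExtraOptions entry surfaces as networking.podSubnet in the generated config
    minikube -p newest-cni-383500 ssh -- grep -n podSubnet /var/tmp/minikube/kubeadm.yaml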
	I1217 01:57:25.699225   10580 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 01:57:25.732231   10580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:57:25.750219   10580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:57:25.764216   10580 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:57:25.768221   10580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:57:25.782223   10580 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:57:25.782223   10580 kubeadm.go:158] found existing configuration files:
	
	I1217 01:57:25.787226   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:57:25.811226   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:57:25.817308   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:57:25.846154   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:57:25.861155   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:57:25.865166   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:57:25.882164   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:57:25.894161   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:57:25.898177   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:57:25.916173   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:57:25.936694   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:57:25.940687   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
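The four grep/rm pairs above are the stale-config sweep: any /etc/kubernetes/*.conf that does not already point at https://control-plane.minikube.internal:8443 is deleted so kubeadm will regenerate it (here every grep fails only because the files do not exist yet). The same sweep, compacted, with the endpoint and file names exactly as logged:

    for f in admin kubelet controller-manager scheduler; do
      sudo grep -q 'https://control-plane.minikube.internal:8443' "/etc/kubernetes/$f.conf" \
        || sudo rm -f "/etc/kubernetes/$f.conf"
    done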
	I1217 01:57:25.956687   10580 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:57:26.100043   10580 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 01:57:26.198370   10580 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:57:26.302677   10580 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:58:51.115615    7596 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:58:51.115718    7596 kubeadm.go:319] 
	I1217 01:58:51.115916    7596 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:58:51.121578    7596 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:58:51.121578    7596 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:58:51.121578    7596 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:58:51.122136    7596 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 01:58:51.122857    7596 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_INET: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 01:58:51.123993    7596 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 01:58:51.124691    7596 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] OS: Linux
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:58:51.125946    7596 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:58:51.126099    7596 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:58:51.126099    7596 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:58:51.128573    7596 out.go:252]   - Generating certificates and keys ...
	I1217 01:58:51.128573    7596 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:58:51.128573    7596 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:58:51.129197    7596 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 01:58:51.129388    7596 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 01:58:51.129558    7596 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 01:58:51.129682    7596 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:58:51.130781    7596 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:58:51.130943    7596 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:58:51.131040    7596 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:58:51.131231    7596 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:58:51.131356    7596 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:58:51.131482    7596 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:58:51.131482    7596 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:58:51.133818    7596 out.go:252]   - Booting up control plane ...
	I1217 01:58:51.133818    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:58:51.133818    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:58:51.135780    7596 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:58:51.135780    7596 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:58:51.135780    7596 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.002324195s
	I1217 01:58:51.135780    7596 kubeadm.go:319] 
	I1217 01:58:51.135780    7596 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:58:51.135780    7596 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:58:51.135780    7596 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:58:51.135780    7596 kubeadm.go:319] 
	I1217 01:58:51.135780    7596 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:58:51.135780    7596 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:58:51.136777    7596 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:58:51.136777    7596 kubeadm.go:319] 
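Note the PID column: lines tagged 7596 belong to the parallel no-preload-184000 start (see the etcd cert SANs above), interleaved here with the newest-cni-383500 run under 10580; both fail the same way. The failure itself is kubeadm's kubelet health gate: after starting the kubelet it polls the local healthz endpoint for up to 4m0s and aborts the wait-control-plane phase when it never answers. The probe and the two follow-ups kubeadm suggests, reproducible by hand on the node (all three taken verbatim from the error text):

    curl -sSL http://127.0.0.1:10248/healthz   # a healthy kubelet answers; here it never does
    systemctl status kubelet
    journalctl -xeu kubelet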
	W1217 01:58:51.136777    7596 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.002324195s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
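Each failed init is followed by a full teardown and an identical retry, which is why the same preflight/certs/kubeconfig sequence repeats below. The teardown step is the reset issued on the next log line (cri-dockerd socket path from this run):

    sudo kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force

The retry cannot succeed until whatever keeps the kubelet from answering healthz is resolved; the most concrete lead this log offers is the SystemVerification warning about deprecated cgroup v1 support for kubelet v1.35 on this WSL2 kernel.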
	
	I1217 01:58:51.139887    7596 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 01:58:51.605403    7596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:58:51.627327    7596 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:58:51.634266    7596 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:58:51.651778    7596 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:58:51.651778    7596 kubeadm.go:158] found existing configuration files:
	
	I1217 01:58:51.657261    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:58:51.670434    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:58:51.674365    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:58:51.692907    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:58:51.707259    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:58:51.711851    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:58:51.731617    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:58:51.746650    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:58:51.750583    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:58:51.769267    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:58:51.784345    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:58:51.789034    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:58:51.805733    7596 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:58:51.926943    7596 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 01:58:52.006918    7596 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:58:52.107226    7596 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:01:27.963444   10580 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:01:27.963444   10580 kubeadm.go:319] 
	I1217 02:01:27.963616   10580 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:01:27.972023   10580 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 02:01:27.973054   10580 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:01:27.973281   10580 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:01:27.973281   10580 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 02:01:27.973281   10580 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 02:01:27.973281   10580 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 02:01:27.973281   10580 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 02:01:27.973879   10580 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_INET: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 02:01:27.975176   10580 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 02:01:27.975817   10580 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] OS: Linux
	I1217 02:01:27.975876   10580 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:01:27.976495   10580 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:01:27.977232   10580 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:01:27.977413   10580 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:01:27.977413   10580 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:01:27.977413   10580 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:01:27.979976   10580 out.go:252]   - Generating certificates and keys ...
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 02:01:27.981175   10580 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 02:01:27.981278   10580 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 02:01:27.982128   10580 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 02:01:27.982285   10580 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 02:01:27.982463   10580 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 02:01:27.982622   10580 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 02:01:27.983316   10580 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 02:01:27.983431   10580 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 02:01:27.985605   10580 out.go:252]   - Booting up control plane ...
	I1217 02:01:27.985605   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 02:01:27.985605   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 02:01:27.985605   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 02:01:27.986216   10580 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 02:01:27.986315   10580 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:01:27.987339   10580 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000575784s
	I1217 02:01:27.987339   10580 kubeadm.go:319] 
	I1217 02:01:27.987339   10580 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:01:27.987339   10580 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:01:27.987339   10580 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:01:27.987339   10580 kubeadm.go:319] 
	I1217 02:01:27.987913   10580 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:01:27.987913   10580 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:01:27.987913   10580 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:01:27.987913   10580 kubeadm.go:319] 
	W1217 02:01:27.987913   10580 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000575784s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 02:01:27.992425   10580 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 02:01:28.454931   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 02:01:28.474574   10580 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 02:01:28.479997   10580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 02:01:28.494933   10580 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 02:01:28.494933   10580 kubeadm.go:158] found existing configuration files:
	
	I1217 02:01:28.501352   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 02:01:28.516227   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 02:01:28.521874   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 02:01:28.540752   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 02:01:28.554535   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 02:01:28.559019   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 02:01:28.577479   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 02:01:28.592775   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 02:01:28.596757   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 02:01:28.614687   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 02:01:28.629343   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 02:01:28.633759   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 02:01:28.653776   10580 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 02:01:28.777097   10580 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 02:01:28.860083   10580 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 02:01:28.960806   10580 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:02:52.901103    7596 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:02:52.901187    7596 kubeadm.go:319] 
	I1217 02:02:52.901405    7596 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:02:52.906962    7596 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 02:02:52.907051    7596 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:02:52.907051    7596 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:02:52.907051    7596 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 02:02:52.907051    7596 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 02:02:52.907664    7596 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_INET: enabled
	I1217 02:02:52.908322    7596 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 02:02:52.908447    7596 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 02:02:52.908571    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 02:02:52.908730    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 02:02:52.908849    7596 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 02:02:52.909000    7596 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] OS: Linux
	I1217 02:02:52.909731    7596 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:02:52.910342    7596 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:02:52.911109    7596 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:02:52.911252    7596 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:02:52.911252    7596 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:02:52.911252    7596 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:02:52.914099    7596 out.go:252]   - Generating certificates and keys ...
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 02:02:52.915391    7596 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 02:02:52.915391    7596 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 02:02:52.915391    7596 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 02:02:52.915391    7596 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 02:02:52.915391    7596 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 02:02:52.915926    7596 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 02:02:52.916016    7596 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 02:02:52.916016    7596 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 02:02:52.916016    7596 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 02:02:52.916016    7596 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 02:02:52.918827    7596 out.go:252]   - Booting up control plane ...
	I1217 02:02:52.918827    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 02:02:52.920875    7596 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 02:02:52.920875    7596 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:02:52.920875    7596 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000516808s
	I1217 02:02:52.920875    7596 kubeadm.go:319] 
	I1217 02:02:52.920875    7596 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:02:52.920875    7596 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:02:52.920875    7596 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:02:52.920875    7596 kubeadm.go:319] 
	I1217 02:02:52.920875    7596 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:02:52.920875    7596 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:02:52.921883    7596 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:02:52.921883    7596 kubeadm.go:319] 
	I1217 02:02:52.921883    7596 kubeadm.go:403] duration metric: took 8m4.1597601s to StartCluster
	I1217 02:02:52.921883    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:52.925883    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:52.985042    7596 cri.go:89] found id: ""
	I1217 02:02:52.985042    7596 logs.go:282] 0 containers: []
	W1217 02:02:52.985042    7596 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:52.985042    7596 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:52.989497    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:53.035444    7596 cri.go:89] found id: ""
	I1217 02:02:53.035444    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.035444    7596 logs.go:284] No container was found matching "etcd"
	I1217 02:02:53.035444    7596 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:53.040633    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:53.090166    7596 cri.go:89] found id: ""
	I1217 02:02:53.090166    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.090166    7596 logs.go:284] No container was found matching "coredns"
	I1217 02:02:53.090166    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:53.095276    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:53.155229    7596 cri.go:89] found id: ""
	I1217 02:02:53.155292    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.155292    7596 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:53.155292    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:53.159579    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:53.201389    7596 cri.go:89] found id: ""
	I1217 02:02:53.201389    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.201389    7596 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:53.201389    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:53.206627    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:53.251727    7596 cri.go:89] found id: ""
	I1217 02:02:53.251807    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.251807    7596 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:53.251807    7596 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:53.255868    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:53.296927    7596 cri.go:89] found id: ""
	I1217 02:02:53.297002    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.297002    7596 logs.go:284] No container was found matching "kindnet"
	I1217 02:02:53.297002    7596 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:53.297002    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:53.362489    7596 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:53.362489    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:53.402379    7596 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:53.402379    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:53.486459    7596 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:02:53.475461   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.476269   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.480737   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.482819   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.484040   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:02:53.475461   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.476269   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.480737   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.482819   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.484040   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:53.486459    7596 logs.go:123] Gathering logs for Docker ...
	I1217 02:02:53.486459    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:02:53.519898    7596 logs.go:123] Gathering logs for container status ...
	I1217 02:02:53.519898    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:02:53.571631    7596 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000516808s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 02:02:53.571705    7596 out.go:285] * 
	W1217 02:02:53.571763    7596 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000516808s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:02:53.571763    7596 out.go:285] * 
	W1217 02:02:53.573684    7596 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:02:53.577599    7596 out.go:203] 
	W1217 02:02:53.580937    7596 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000516808s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:02:53.580937    7596 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:02:53.580937    7596 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 02:02:53.584112    7596 out.go:203] 
	
	
	==> Docker <==
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638787318Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638875828Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638886629Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638892529Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638897830Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638925533Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638969938Z" level=info msg="Initializing buildkit"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.814190912Z" level=info msg="Completed buildkit initialization"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.834145684Z" level=info msg="Daemon has completed initialization"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.834353706Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.834360607Z" level=info msg="API listen on [::]:2376"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.834438816Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 01:54:11 no-preload-184000 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 01:54:12 no-preload-184000 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Loaded network plugin cni"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 01:54:12 no-preload-184000 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:02:58.535196   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:58.536137   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:58.541016   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:58.541938   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:58.544455   11178 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.736198] tmpfs: Unknown parameter 'noswap'
	[  +0.306826] CPU: 13 PID: 440898 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000005] RIP: 0033:0x7f86f2041b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f86f2041af6.
	[  +0.000001] RSP: 002b:00007ffdf29d7630 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +1.037447] CPU: 4 PID: 441085 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fed1ac73b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7fed1ac73af6.
	[  +0.000001] RSP: 002b:00007fff679e5600 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[ +20.473571] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 02:02:58 up  2:22,  0 user,  load average: 0.75, 2.44, 3.51
	Linux no-preload-184000 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:02:55 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:02:56 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 324.
	Dec 17 02:02:56 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:56 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:56 no-preload-184000 kubelet[10999]: E1217 02:02:56.142553   10999 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:02:56 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:02:56 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:02:56 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 325.
	Dec 17 02:02:56 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:56 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:56 no-preload-184000 kubelet[11029]: E1217 02:02:56.873356   11029 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:02:56 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:02:56 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:02:57 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 326.
	Dec 17 02:02:57 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:57 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:57 no-preload-184000 kubelet[11059]: E1217 02:02:57.619686   11059 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:02:57 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:02:57 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:02:58 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 327.
	Dec 17 02:02:58 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:58 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:58 no-preload-184000 kubelet[11130]: E1217 02:02:58.384188   11130 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:02:58 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:02:58 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
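
The kubelet journal above pins down the failure: on this WSL2 host (cgroup v1), kubelet v1.35.0-beta.0 exits at config validation with "kubelet is configured to not run on a host using cgroup v1", so kubeadm's wait-control-plane health check at 127.0.0.1:10248 can never pass. The commands below are a minimal sketch of the follow-up steps the log itself suggests, using the profile name from this run; whether the suggested cgroup-driver flag alone resolves a cgroup v1 validation failure is an open question (per the SystemVerification warning above, cgroup v1 also needs the kubelet configuration option 'FailCgroupV1' set to 'false', or a migration of the host to cgroup v2).

    # Troubleshooting commands quoted verbatim by kubeadm, run inside the node:
    out/minikube-windows-amd64.exe ssh -p no-preload-184000 "sudo systemctl status kubelet"
    out/minikube-windows-amd64.exe ssh -p no-preload-184000 "sudo journalctl -xeu kubelet"

    # Retry with the cgroup driver that minikube's Suggestion line proposes:
    out/minikube-windows-amd64.exe start -p no-preload-184000 --extra-config=kubelet.cgroup-driver=systemd
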
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000: exit status 6 (598.7463ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 02:02:59.564952    7224 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-184000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-184000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-184000
helpers_test.go:244: (dbg) docker inspect no-preload-184000:

-- stdout --
	[
	    {
	        "Id": "335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed",
	        "Created": "2025-12-17T01:54:01.802457191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 400896,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:54:02.102156548Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/hostname",
	        "HostsPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/hosts",
	        "LogPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed-json.log",
	        "Name": "/no-preload-184000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-184000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-184000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-184000",
	                "Source": "/var/lib/docker/volumes/no-preload-184000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-184000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-184000",
	                "name.minikube.sigs.k8s.io": "no-preload-184000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "878415a4285bb4e9322b366762510a9c3489066b0ef84b5d48358f5f81e082bf",
	            "SandboxKey": "/var/run/docker/netns/878415a4285b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62904"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62905"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62907"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62908"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-184000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6adb91d102dfa92bfa154127e93e39401be06a5d21df5043f3e85e012e93e321",
	                    "EndpointID": "8e3f71a707f374d60db9e819d8097a078527854d326de7a03065e5d1fcc8c8bd",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-184000",
	                        "335cbfb80690"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
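
The inspect output above shows each container port published on a random loopback port (8443/tcp, the API server port, maps to 127.0.0.1:62908 in this run). A quick sketch for pulling that mapping straight out of the JSON, assuming only the stock docker CLI and its Go-template syntax:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-184000

(From PowerShell the single-quoted template works as written; cmd.exe needs the quoting adjusted.)
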
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-184000 -n no-preload-184000
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-184000 -n no-preload-184000: exit status 6 (588.2622ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 02:03:00.224007    3028 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-184000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/DeployApp]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-184000 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-184000 logs -n 25: (1.0884672s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-044000 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0        │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:56 UTC │
	│ addons  │ enable metrics-server -p embed-certs-653800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                   │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ stop    │ -p embed-certs-653800 --alsologtostderr -v=3                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-653800 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                              │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p embed-certs-653800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-278200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                         │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ stop    │ -p default-k8s-diff-port-278200 --alsologtostderr -v=3                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-278200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p default-k8s-diff-port-278200 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ old-k8s-version-044000 image list --format=json                                                                                                                                                                            │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ pause   │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ unpause │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │                     │
	│ image   │ embed-certs-653800 image list --format=json                                                                                                                                                                                │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ default-k8s-diff-port-278200 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:56:50
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
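
	The header layout documented above is the standard glog/klog prefix that every log line below follows. As a minimal sketch (Go stdlib only; the field names are our own), one way to split such a line into its parts:

package main

import (
	"fmt"
	"regexp"
)

// klogHeader matches the format documented above:
// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
var klogHeader = regexp.MustCompile(
	`^([IWEF])(\d{2})(\d{2}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) (\S+:\d+)\] (.*)$`)

func main() {
	line := `I1217 01:56:50.801354   10580 out.go:360] Setting OutFile to fd 1172 ...`
	if m := klogHeader.FindStringSubmatch(line); m != nil {
		// m[1] severity, m[2] month, m[3] day, m[4] time,
		// m[5] thread id, m[6] file:line, m[7] message
		fmt.Printf("sev=%s date=%s/%s time=%s tid=%s at=%s msg=%q\n",
			m[1], m[2], m[3], m[4], m[5], m[6], m[7])
	}
}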
	I1217 01:56:50.801354   10580 out.go:360] Setting OutFile to fd 1172 ...
	I1217 01:56:50.842347   10580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:56:50.842347   10580 out.go:374] Setting ErrFile to fd 824...
	I1217 01:56:50.842347   10580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:56:50.868487   10580 out.go:368] Setting JSON to false
	I1217 01:56:50.873633   10580 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8199,"bootTime":1765928411,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 01:56:50.873795   10580 start.go:141] gopshost.Virtualization returned error: not implemented yet
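
	The hostinfo line above is the JSON form of gopsutil's host.Info(), and the warning right after it comes from the separate host.Virtualization() call, which gopsutil does not implement on Windows. A minimal sketch of the same two calls (assuming the gopsutil/v3 module is available):

package main

import (
	"encoding/json"
	"fmt"
	"log"

	"github.com/shirou/gopsutil/v3/host"
)

func main() {
	// host.Info() fills the same fields the log prints: hostname, uptime,
	// bootTime, procs, os/platform/kernel details, and hostId.
	info, err := host.Info()
	if err != nil {
		log.Fatal(err)
	}
	b, _ := json.Marshal(info)
	fmt.Println(string(b))

	// Virtualization detection is a separate call and, as the warning
	// above shows, returns "not implemented yet" on Windows.
	if _, _, err := host.Virtualization(); err != nil {
		fmt.Println("virtualization:", err)
	}
}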
	I1217 01:56:50.877230   10580 out.go:179] * [newest-cni-383500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 01:56:50.879602   10580 notify.go:221] Checking for updates...
	I1217 01:56:50.882592   10580 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 01:56:50.886357   10580 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:56:50.888496   10580 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 01:56:50.891194   10580 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:56:50.892900   10580 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:56:50.897014   10580 config.go:182] Loaded profile config "default-k8s-diff-port-278200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:56:50.897014   10580 config.go:182] Loaded profile config "embed-certs-653800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:56:50.898014   10580 config.go:182] Loaded profile config "no-preload-184000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 01:56:50.898014   10580 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:56:51.023603   10580 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 01:56:51.027600   10580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:56:51.269309   10580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:56:51.250186339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:56:51.271302   10580 out.go:179] * Using the docker driver based on user configuration
	I1217 01:56:51.274302   10580 start.go:309] selected driver: docker
	I1217 01:56:51.274302   10580 start.go:927] validating driver "docker" against <nil>
	I1217 01:56:51.274302   10580 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:56:51.315871   10580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:56:51.584149   10580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:56:51.563534441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:56:51.584149   10580 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 01:56:51.584149   10580 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 01:56:51.585155   10580 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 01:56:51.589148   10580 out.go:179] * Using Docker Desktop driver with root privileges
	I1217 01:56:51.590146   10580 cni.go:84] Creating CNI manager for ""
	I1217 01:56:51.591150   10580 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 01:56:51.591150   10580 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 01:56:51.591150   10580 start.go:353] cluster config:
	{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:56:51.593150   10580 out.go:179] * Starting "newest-cni-383500" primary control-plane node in "newest-cni-383500" cluster
	I1217 01:56:51.596146   10580 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 01:56:51.597151   10580 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:56:51.600152   10580 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:56:51.600152   10580 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:56:51.600152   10580 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 01:56:51.600152   10580 cache.go:65] Caching tarball of preloaded images
	I1217 01:56:51.600152   10580 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 01:56:51.600152   10580 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 01:56:51.601151   10580 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 01:56:51.601151   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json: {Name:mkf80e0956bcb8fe665f18deea862644aea3658c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:56:51.682130   10580 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:56:51.682186   10580 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:56:51.682226   10580 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:56:51.682296   10580 start.go:360] acquireMachinesLock for newest-cni-383500: {Name:mk34ae41921c4a11acc2a38ede8796b825a35934 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:56:51.682463   10580 start.go:364] duration metric: took 127.8µs to acquireMachinesLock for "newest-cni-383500"
	I1217 01:56:51.682643   10580 start.go:93] Provisioning new machine with config: &{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 01:56:51.682643   10580 start.go:125] createHost starting for "" (driver="docker")
	W1217 01:56:50.658968   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	W1217 01:56:53.155347   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	I1217 01:56:50.357392    6652 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:63284/healthz ...
	I1217 01:56:50.369628    6652 api_server.go:279] https://127.0.0.1:63284/healthz returned 200:
	ok
	I1217 01:56:50.373212    6652 api_server.go:141] control plane version: v1.34.2
	I1217 01:56:50.373212    6652 api_server.go:131] duration metric: took 1.5164341s to wait for apiserver health ...
	I1217 01:56:50.373212    6652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 01:56:50.383881    6652 system_pods.go:59] 8 kube-system pods found
	I1217 01:56:50.383935    6652 system_pods.go:61] "coredns-66bc5c9577-mq7nr" [e3b40fbf-c8cf-4da5-a3e1-544cdb2cf9d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:56:50.383972    6652 system_pods.go:61] "etcd-default-k8s-diff-port-278200" [a72b7231-603f-4f60-9395-a7f842c86452] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 01:56:50.383972    6652 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-278200" [8dc29fce-1059-4acc-8a09-64f9eed9a84a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 01:56:50.383972    6652 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-278200" [916662d2-3e76-4bf9-9b11-b4c5cd906d1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 01:56:50.383972    6652 system_pods.go:61] "kube-proxy-hp6zw" [8399cddb-2b50-4401-adbb-83631e5b1a3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 01:56:50.383972    6652 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-278200" [01597b66-6476-4b34-9010-67c8fa5ba2b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 01:56:50.383972    6652 system_pods.go:61] "metrics-server-746fcd58dc-zg2gc" [1347d3c4-9a8a-4e8c-9c00-d649fa23179f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 01:56:50.383972    6652 system_pods.go:61] "storage-provisioner" [89564fde-7887-446a-bab4-f662064c9fde] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 01:56:50.383972    6652 system_pods.go:74] duration metric: took 10.76ms to wait for pod list to return data ...
	I1217 01:56:50.383972    6652 default_sa.go:34] waiting for default service account to be created ...
	I1217 01:56:50.472293    6652 default_sa.go:45] found service account: "default"
	I1217 01:56:50.472293    6652 default_sa.go:55] duration metric: took 88.3195ms for default service account to be created ...
	I1217 01:56:50.472293    6652 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 01:56:50.550966    6652 system_pods.go:86] 8 kube-system pods found
	I1217 01:56:50.550966    6652 system_pods.go:89] "coredns-66bc5c9577-mq7nr" [e3b40fbf-c8cf-4da5-a3e1-544cdb2cf9d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:56:50.551963    6652 system_pods.go:89] "etcd-default-k8s-diff-port-278200" [a72b7231-603f-4f60-9395-a7f842c86452] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 01:56:50.551963    6652 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-278200" [8dc29fce-1059-4acc-8a09-64f9eed9a84a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 01:56:50.551963    6652 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-278200" [916662d2-3e76-4bf9-9b11-b4c5cd906d1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 01:56:50.551963    6652 system_pods.go:89] "kube-proxy-hp6zw" [8399cddb-2b50-4401-adbb-83631e5b1a3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 01:56:50.551963    6652 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-278200" [01597b66-6476-4b34-9010-67c8fa5ba2b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 01:56:50.551963    6652 system_pods.go:89] "metrics-server-746fcd58dc-zg2gc" [1347d3c4-9a8a-4e8c-9c00-d649fa23179f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 01:56:50.551963    6652 system_pods.go:89] "storage-provisioner" [89564fde-7887-446a-bab4-f662064c9fde] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 01:56:50.551963    6652 system_pods.go:126] duration metric: took 79.6691ms to wait for k8s-apps to be running ...
	I1217 01:56:50.551963    6652 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 01:56:50.558963    6652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:56:50.647965    6652 system_svc.go:56] duration metric: took 96.0006ms WaitForService to wait for kubelet
	I1217 01:56:50.647965    6652 kubeadm.go:587] duration metric: took 11.8438008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:56:50.647965    6652 node_conditions.go:102] verifying NodePressure condition ...
	I1217 01:56:50.655959    6652 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1217 01:56:50.655959    6652 node_conditions.go:123] node cpu capacity is 16
	I1217 01:56:50.655959    6652 node_conditions.go:105] duration metric: took 7.9936ms to run NodePressure ...
	I1217 01:56:50.655959    6652 start.go:242] waiting for startup goroutines ...
	I1217 01:56:50.655959    6652 start.go:247] waiting for cluster config update ...
	I1217 01:56:50.655959    6652 start.go:256] writing updated cluster config ...
	I1217 01:56:50.662974    6652 ssh_runner.go:195] Run: rm -f paused
	I1217 01:56:50.670974    6652 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:56:50.679961    6652 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mq7nr" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 01:56:52.758113    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:56:51.685685   10580 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 01:56:51.686059   10580 start.go:159] libmachine.API.Create for "newest-cni-383500" (driver="docker")
	I1217 01:56:51.686127   10580 client.go:173] LocalClient.Create starting
	I1217 01:56:51.686740   10580 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1217 01:56:51.686997   10580 main.go:143] libmachine: Decoding PEM data...
	I1217 01:56:51.686997   10580 main.go:143] libmachine: Parsing certificate...
	I1217 01:56:51.687153   10580 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1217 01:56:51.687320   10580 main.go:143] libmachine: Decoding PEM data...
	I1217 01:56:51.687320   10580 main.go:143] libmachine: Parsing certificate...
	I1217 01:56:51.691438   10580 cli_runner.go:164] Run: docker network inspect newest-cni-383500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 01:56:51.737765   10580 cli_runner.go:211] docker network inspect newest-cni-383500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 01:56:51.740755   10580 network_create.go:284] running [docker network inspect newest-cni-383500] to gather additional debugging logs...
	I1217 01:56:51.740755   10580 cli_runner.go:164] Run: docker network inspect newest-cni-383500
	W1217 01:56:51.801443   10580 cli_runner.go:211] docker network inspect newest-cni-383500 returned with exit code 1
	I1217 01:56:51.802437   10580 network_create.go:287] error running [docker network inspect newest-cni-383500]: docker network inspect newest-cni-383500: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-383500 not found
	I1217 01:56:51.802437   10580 network_create.go:289] output of [docker network inspect newest-cni-383500]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-383500 not found
	
	** /stderr **
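
	The sequence above is minikube's existence probe: `docker network inspect` exits non-zero with "network ... not found" on stderr when the network is absent, which is the signal to create it. A rough equivalent of that probe (the network name is taken from the log; error handling is simplified):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// networkExists runs `docker network inspect` and treats a non-zero exit
// whose output mentions "not found" as a clean "does not exist".
func networkExists(name string) (bool, error) {
	out, err := exec.Command("docker", "network", "inspect", name).CombinedOutput()
	if err == nil {
		return true, nil
	}
	if strings.Contains(string(out), "not found") {
		return false, nil
	}
	return false, fmt.Errorf("docker network inspect %s: %v: %s", name, err, out)
}

func main() {
	ok, err := networkExists("newest-cni-383500")
	fmt.Println(ok, err)
}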
	I1217 01:56:51.804999   10580 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:56:51.880941   10580 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:56:51.896006   10580 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:56:51.908781   10580 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000faab70}
	I1217 01:56:51.908781   10580 network_create.go:124] attempt to create docker network newest-cni-383500 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1217 01:56:51.911893   10580 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500
	W1217 01:56:51.964261   10580 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500 returned with exit code 1
	W1217 01:56:51.964261   10580 network_create.go:149] failed to create docker network newest-cni-383500 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1217 01:56:51.964261   10580 network_create.go:116] failed to create docker network newest-cni-383500 192.168.67.0/24, will retry: subnet is taken
	I1217 01:56:51.989641   10580 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:56:52.003768   10580 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f5b5c0}
	I1217 01:56:52.003768   10580 network_create.go:124] attempt to create docker network newest-cni-383500 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 01:56:52.007075   10580 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500
	I1217 01:56:52.149371   10580 network_create.go:108] docker network newest-cni-383500 192.168.76.0/24 created
	I1217 01:56:52.149371   10580 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-383500" container
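
	The attempts above show the subnet-probing loop: 192.168.49.0/24 and 192.168.58.0/24 are already reserved, 192.168.67.0/24 is rejected by the daemon as overlapping, and 192.168.76.0/24 succeeds, after which the node gets the first client address (.2) as its static IP. The candidates advance the third octet in steps of 9, as inferred from the subnets seen here. A sketch of that retry loop, with createNetwork wrapping the same CLI call as in the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// createNetwork mirrors the `docker network create` invocation in the log,
// returning an error when the daemon rejects the subnet.
func createNetwork(name, subnet, gateway string) error {
	out, err := exec.Command("docker", "network", "create",
		"--driver=bridge", "--subnet="+subnet, "--gateway="+gateway, name).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %s", err, out)
	}
	return nil
}

func main() {
	name := "newest-cni-383500"
	// Start at 192.168.49.0/24 and advance the third octet by 9 per attempt.
	for third := 49; third <= 255; third += 9 {
		subnet := fmt.Sprintf("192.168.%d.0/24", third)
		gateway := fmt.Sprintf("192.168.%d.1", third)
		err := createNetwork(name, subnet, gateway)
		if err == nil {
			fmt.Println("created", name, "on", subnet)
			return
		}
		if strings.Contains(err.Error(), "overlaps") {
			fmt.Println("subnet is taken, will retry:", subnet)
			continue // held by another network, try the next candidate
		}
		fmt.Println("giving up:", err)
		return
	}
}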
	I1217 01:56:52.161020   10580 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 01:56:52.221477   10580 cli_runner.go:164] Run: docker volume create newest-cni-383500 --label name.minikube.sigs.k8s.io=newest-cni-383500 --label created_by.minikube.sigs.k8s.io=true
	I1217 01:56:52.277863   10580 oci.go:103] Successfully created a docker volume newest-cni-383500
	I1217 01:56:52.281622   10580 cli_runner.go:164] Run: docker run --rm --name newest-cni-383500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-383500 --entrypoint /usr/bin/test -v newest-cni-383500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 01:56:53.597934   10580 cli_runner.go:217] Completed: docker run --rm --name newest-cni-383500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-383500 --entrypoint /usr/bin/test -v newest-cni-383500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.3162925s)
	I1217 01:56:53.597934   10580 oci.go:107] Successfully prepared a docker volume newest-cni-383500
	I1217 01:56:53.597934   10580 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:56:53.597934   10580 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 01:56:53.602121   10580 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-383500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	W1217 01:56:55.164284   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	W1217 01:56:57.657496   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	W1217 01:56:55.197325    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:56:57.691480    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:56:59.691833    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:00.414359   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	I1217 01:57:01.221784   10700 pod_ready.go:94] pod "coredns-66bc5c9577-rkqgn" is "Ready"
	I1217 01:57:01.221832   10700 pod_ready.go:86] duration metric: took 31.57611s for pod "coredns-66bc5c9577-rkqgn" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.231015   10700 pod_ready.go:83] waiting for pod "etcd-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.305989   10700 pod_ready.go:94] pod "etcd-embed-certs-653800" is "Ready"
	I1217 01:57:01.306038   10700 pod_ready.go:86] duration metric: took 74.9721ms for pod "etcd-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.362260   10700 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.373797   10700 pod_ready.go:94] pod "kube-apiserver-embed-certs-653800" is "Ready"
	I1217 01:57:01.373797   10700 pod_ready.go:86] duration metric: took 11.4721ms for pod "kube-apiserver-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.379508   10700 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.421736   10700 pod_ready.go:94] pod "kube-controller-manager-embed-certs-653800" is "Ready"
	I1217 01:57:01.421778   10700 pod_ready.go:86] duration metric: took 42.2686ms for pod "kube-controller-manager-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.549272   10700 pod_ready.go:83] waiting for pod "kube-proxy-tnkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.831507   10700 pod_ready.go:94] pod "kube-proxy-tnkvj" is "Ready"
	I1217 01:57:02.832053   10700 pod_ready.go:86] duration metric: took 282.7765ms for pod "kube-proxy-tnkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.837864   10700 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.850194   10700 pod_ready.go:94] pod "kube-scheduler-embed-certs-653800" is "Ready"
	I1217 01:57:02.850247   10700 pod_ready.go:86] duration metric: took 12.3828ms for pod "kube-scheduler-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.850295   10700 pod_ready.go:40] duration metric: took 33.2150881s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:57:02.959538   10700 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 01:57:03.043739   10700 out.go:179] * Done! kubectl is now configured to use "embed-certs-653800" cluster and "default" namespace by default
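
	The pod_ready lines threaded through this log all follow one pattern: poll each kube-system pod until it is Ready or gone, recording a per-pod duration metric inside an overall extra-wait budget (4m0s here). A simplified sketch of that loop; checkReady is a hypothetical stand-in for the client-go status lookup minikube actually performs:

package main

import (
	"errors"
	"fmt"
	"time"
)

var errGone = errors.New("pod gone")

// checkReady stands in for querying the pod's Ready condition via
// client-go; it would return errGone when the pod no longer exists.
func checkReady(namespace, pod string) (bool, error) {
	return false, nil // placeholder
}

// waitPodReady mirrors the pod_ready.go behaviour in the log: poll on an
// interval until the pod is Ready, is gone, or the timeout expires.
func waitPodReady(namespace, pod string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		ready, err := checkReady(namespace, pod)
		if errors.Is(err, errGone) {
			return nil // "Ready or be gone": gone counts as done
		}
		if err == nil && ready {
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s never became Ready within %v", namespace, pod, timeout)
}

func main() {
	fmt.Println(waitPodReady("kube-system", "coredns-66bc5c9577-mq7nr", 4*time.Minute))
}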
	W1217 01:57:01.693305    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:04.195654    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:06.294817    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:08.700814    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:57:10.483352   10580 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-383500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (16.8803148s)
	I1217 01:57:10.483443   10580 kic.go:203] duration metric: took 16.8852234s to extract preloaded images to volume ...
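
	The 16.9s step just completed is how the preload is applied: a throwaway kicbase container bind-mounts the lz4 tarball read-only, mounts the cluster's named volume at /extractDir, and runs tar as its entrypoint so the images land directly in the volume. Roughly the invocation being timed (tarball path shortened):

package main

import "os/exec"

func main() {
	tarball := `C:\...\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4`
	volume := "newest-cni-383500"
	image := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141"

	// Bind-mount the preload tarball read-only, mount the named volume at
	// /extractDir, and let tar unpack the images straight into the volume.
	cmd := exec.Command("docker", "run", "--rm",
		"--entrypoint", "/usr/bin/tar",
		"-v", tarball+":/preloaded.tar:ro",
		"-v", volume+":/extractDir",
		image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
	if out, err := cmd.CombinedOutput(); err != nil {
		panic(string(out))
	}
}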
	I1217 01:57:10.489300   10580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:57:10.753192   10580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:57:10.732557974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:57:10.757222   10580 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	W1217 01:57:11.205059    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:13.689668    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:57:11.047255   10580 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-383500 --name newest-cni-383500 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-383500 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-383500 --network newest-cni-383500 --ip 192.168.76.2 --volume newest-cni-383500:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 01:57:11.789740   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Running}}
	I1217 01:57:11.849518   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 01:57:11.908509   10580 cli_runner.go:164] Run: docker exec newest-cni-383500 stat /var/lib/dpkg/alternatives/iptables
	I1217 01:57:12.021676   10580 oci.go:144] the created container "newest-cni-383500" has a running status.
	I1217 01:57:12.021676   10580 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa...
	I1217 01:57:12.131609   10580 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 01:57:12.208714   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 01:57:12.272788   10580 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 01:57:12.273496   10580 kic_runner.go:114] Args: [docker exec --privileged newest-cni-383500 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 01:57:12.387830   10580 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa...
	I1217 01:57:14.496810   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 01:57:14.552924   10580 machine.go:94] provisionDockerMachine start ...
	I1217 01:57:14.556597   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:14.614668   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:14.628589   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:14.628589   10580 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:57:14.803670   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 01:57:14.803752   10580 ubuntu.go:182] provisioning hostname "newest-cni-383500"
	I1217 01:57:14.806966   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:14.872659   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:14.873288   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:14.873288   10580 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-383500 && echo "newest-cni-383500" | sudo tee /etc/hostname
	I1217 01:57:15.070847   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 01:57:15.076754   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:15.138180   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:15.138558   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:15.138558   10580 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-383500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-383500/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-383500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:57:15.322611   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 01:57:15.322611   10580 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 01:57:15.322611   10580 ubuntu.go:190] setting up certificates
	I1217 01:57:15.322611   10580 provision.go:84] configureAuth start
	I1217 01:57:15.327543   10580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 01:57:15.379974   10580 provision.go:143] copyHostCerts
	I1217 01:57:15.380366   10580 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 01:57:15.380414   10580 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 01:57:15.380832   10580 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 01:57:15.382184   10580 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 01:57:15.382226   10580 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 01:57:15.382581   10580 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 01:57:15.383683   10580 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 01:57:15.383736   10580 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 01:57:15.384159   10580 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 01:57:15.384159   10580 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-383500 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-383500]
	I1217 01:57:15.508571   10580 provision.go:177] copyRemoteCerts
	I1217 01:57:15.512616   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:57:15.515422   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:15.573004   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:15.707286   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 01:57:15.746639   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 01:57:15.775638   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:57:15.812045   10580 provision.go:87] duration metric: took 488.4307ms to configureAuth
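The server certificate generated above embeds the SANs from the san=[...] line (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-383500). A quick way to confirm them on the node — a sketch, assuming the standard openssl CLI is available in the guest:

    openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'
    # should list the five names/IPs from the san=[...] line above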
	I1217 01:57:15.812045   10580 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:57:15.812045   10580 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 01:57:15.815050   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	W1217 01:57:15.691769    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:17.697151    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:57:15.867044   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:15.867044   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:15.867044   10580 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 01:57:16.041586   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 01:57:16.041586   10580 ubuntu.go:71] root file system type: overlay
	I1217 01:57:16.041586   10580 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 01:57:16.045689   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:16.104012   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:16.104611   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:16.104703   10580 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 01:57:16.297193   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 01:57:16.300844   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:16.360905   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:16.361498   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:16.361540   10580 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 01:57:18.042542   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-17 01:57:16.287130539 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1217 01:57:18.042542   10580 machine.go:97] duration metric: took 3.4895662s to provisionDockerMachine
	I1217 01:57:18.042542   10580 client.go:176] duration metric: took 26.3559894s to LocalClient.Create
	I1217 01:57:18.042542   10580 start.go:167] duration metric: took 26.3560942s to libmachine.API.Create "newest-cni-383500"
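The unit file written during provisioning uses the standard systemd override pattern its own comments describe: a bare ExecStart= first clears the inherited command, then the replacement is set, and the diff || { mv; daemon-reload; restart; } guard applies the file only when it differs from what is already installed. A minimal sketch of the same pattern with illustrative dockerd flags (not minikube's exact values):

    # clear the inherited ExecStart, then define the override; without the empty
    # line systemd rejects the unit ("more than one ExecStart= setting")
    sudo mkdir -p /etc/systemd/system/docker.service.d
    cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/override.conf
    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd://
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart docker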
	I1217 01:57:18.042542   10580 start.go:293] postStartSetup for "newest-cni-383500" (driver="docker")
	I1217 01:57:18.042542   10580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:57:18.050002   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:57:18.053976   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.112173   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:18.256941   10580 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:57:18.268729   10580 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:57:18.268729   10580 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:57:18.268729   10580 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 01:57:18.268729   10580 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 01:57:18.269469   10580 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 01:57:18.273808   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:57:18.289831   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 01:57:18.317384   10580 start.go:296] duration metric: took 274.8381ms for postStartSetup
	I1217 01:57:18.322385   10580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 01:57:18.369389   10580 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 01:57:18.375387   10580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:57:18.381078   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.432604   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:18.561382   10580 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:57:18.571573   10580 start.go:128] duration metric: took 26.8885332s to createHost
	I1217 01:57:18.571573   10580 start.go:83] releasing machines lock for "newest-cni-383500", held for 26.8886481s
	I1217 01:57:18.575096   10580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 01:57:18.630669   10580 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 01:57:18.634666   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.635666   10580 ssh_runner.go:195] Run: cat /version.json
	I1217 01:57:18.639677   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.695664   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:18.695664   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	W1217 01:57:18.859792   10580 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
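Status 127 here is the shell's "command not found": the Windows binary name curl.exe was run inside the Linux guest, which (if it ships the tool at all) names it plain curl. The connectivity probe, using the Linux binary name, would be:

    curl -sS -m 2 https://registry.k8s.io/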
	I1217 01:57:18.877228   10580 ssh_runner.go:195] Run: systemctl --version
	I1217 01:57:18.892439   10580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:57:18.900947   10580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:57:18.905555   10580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:57:18.954841   10580 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
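The find invocation above is logged as a raw argv, so its globs and parentheses appear unquoted. A shell-safe equivalent of the same CNI-disable step would look like this (a sketch, not minikube's literal command line):

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -o -name '*podman*' \) -a ! -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv "$1" "$1.mk_disabled"' _ {} \;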
	I1217 01:57:18.954952   10580 start.go:496] detecting cgroup driver to use...
	I1217 01:57:18.955015   10580 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:57:18.955015   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:57:18.991199   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1217 01:57:19.008171   10580 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 01:57:19.008230   10580 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 01:57:19.013119   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 01:57:19.028717   10580 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 01:57:19.032858   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 01:57:19.052914   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:57:19.072904   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 01:57:19.095550   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:57:19.115854   10580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:57:19.132848   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 01:57:19.151846   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 01:57:19.172853   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 01:57:19.193907   10580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:57:19.210892   10580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:57:19.227892   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:19.399536   10580 ssh_runner.go:195] Run: sudo systemctl restart containerd
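After the sed edits above, containerd should be configured for the cgroupfs driver and the pinned pause image. A quick post-restart check against the same config file the commands edited:

    grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
    # expect: SystemdCgroup = false and sandbox_image = "registry.k8s.io/pause:3.10.1"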
	I1217 01:57:19.601453   10580 start.go:496] detecting cgroup driver to use...
	I1217 01:57:19.601453   10580 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:57:19.605450   10580 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 01:57:19.629461   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:57:19.656299   10580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:57:19.736745   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:57:19.764285   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:57:19.789001   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:57:19.815453   10580 ssh_runner.go:195] Run: which cri-dockerd
	I1217 01:57:19.827238   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 01:57:19.842026   10580 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 01:57:19.874597   10580 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 01:57:20.041348   10580 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 01:57:20.226962   10580 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 01:57:20.226962   10580 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
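The 130-byte daemon.json itself is not logged; a typical shape for forcing the cgroupfs driver (an assumption about the file's contents, not a transcript of it), plus a check that the engine picked it up:

    # hypothetical daemon.json matching the "cgroupfs" driver named above
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    { "exec-opts": ["native.cgroupdriver=cgroupfs"] }
    EOF
    sudo systemctl restart docker
    docker info --format '{{.CgroupDriver}}'   # expect: cgroupfs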
	I1217 01:57:20.254551   10580 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 01:57:20.278555   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:20.468211   10580 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 01:57:21.513591   10580 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0453647s)
	I1217 01:57:21.520768   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:57:21.544117   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 01:57:21.578618   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:57:21.602252   10580 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 01:57:21.754251   10580 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 01:57:21.925790   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:22.049631   10580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 01:57:22.080439   10580 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 01:57:22.102178   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:22.247555   10580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 01:57:22.356045   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:57:22.374818   10580 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 01:57:22.380720   10580 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 01:57:22.388747   10580 start.go:564] Will wait 60s for crictl version
	I1217 01:57:22.393402   10580 ssh_runner.go:195] Run: which crictl
	I1217 01:57:22.405105   10580 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:57:22.456110   10580 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 01:57:22.460422   10580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:57:22.517812   10580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:57:22.562431   10580 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 01:57:22.566477   10580 cli_runner.go:164] Run: docker exec -t newest-cni-383500 dig +short host.docker.internal
	I1217 01:57:22.701109   10580 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 01:57:22.707802   10580 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 01:57:22.717558   10580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
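The one-liner above is an idempotent hosts update: strip any existing entry for the name, append a fresh one, and copy the result back over /etc/hosts. A generalized sketch of the same pattern (the function name and temp path are illustrative; the name is treated as a regex by grep):

    update_hosts_entry() {
      local ip="$1" name="$2"
      # drop any line ending in <tab><name>, then append the new mapping
      { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > "/tmp/hosts.$$"
      sudo cp "/tmp/hosts.$$" /etc/hosts
    }
    update_hosts_entry 192.168.65.254 host.minikube.internal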
	I1217 01:57:22.737642   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:22.798183   10580 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1217 01:57:20.222966    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:22.694494    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:57:23.189475    6652 pod_ready.go:94] pod "coredns-66bc5c9577-mq7nr" is "Ready"
	I1217 01:57:23.189475    6652 pod_ready.go:86] duration metric: took 32.5090332s for pod "coredns-66bc5c9577-mq7nr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.194104    6652 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.202184    6652 pod_ready.go:94] pod "etcd-default-k8s-diff-port-278200" is "Ready"
	I1217 01:57:23.202184    6652 pod_ready.go:86] duration metric: took 8.0443ms for pod "etcd-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.206828    6652 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.213978    6652 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-278200" is "Ready"
	I1217 01:57:23.213978    6652 pod_ready.go:86] duration metric: took 7.1505ms for pod "kube-apiserver-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.217306    6652 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.387857    6652 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-278200" is "Ready"
	I1217 01:57:23.387920    6652 pod_ready.go:86] duration metric: took 170.6119ms for pod "kube-controller-manager-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.587111    6652 pod_ready.go:83] waiting for pod "kube-proxy-hp6zw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.985373    6652 pod_ready.go:94] pod "kube-proxy-hp6zw" is "Ready"
	I1217 01:57:23.986730    6652 pod_ready.go:86] duration metric: took 399.613ms for pod "kube-proxy-hp6zw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:24.201566    6652 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:24.586537    6652 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-278200" is "Ready"
	I1217 01:57:24.586586    6652 pod_ready.go:86] duration metric: took 385.0143ms for pod "kube-scheduler-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:24.586640    6652 pod_ready.go:40] duration metric: took 33.9151651s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:57:24.687654    6652 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 01:57:25.088107    6652 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-278200" cluster and "default" namespace by default
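The pod_ready polling above (from the interleaved default-k8s-diff-port-278200 run, pid 6652) can be reproduced by hand with kubectl's own readiness wait; a sketch using the coredns label the log names:

    kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=120s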
	I1217 01:57:22.800238   10580 kubeadm.go:884] updating cluster {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:57:22.800267   10580 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:57:22.804334   10580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 01:57:22.840199   10580 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 01:57:22.840199   10580 docker.go:621] Images already preloaded, skipping extraction
	I1217 01:57:22.843860   10580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 01:57:22.875886   10580 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 01:57:22.875953   10580 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:57:22.876007   10580 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1217 01:57:22.876138   10580 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-383500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 01:57:22.881452   10580 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 01:57:22.963596   10580 cni.go:84] Creating CNI manager for ""
	I1217 01:57:22.963596   10580 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 01:57:22.963596   10580 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 01:57:22.963596   10580 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-383500 NodeName:newest-cni-383500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:57:22.964766   10580 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-383500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:57:22.971170   10580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:57:22.988148   10580 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:57:22.993571   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:57:23.008239   10580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 01:57:23.168781   10580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 01:57:23.268253   10580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
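Once the rendered config lands on the node as kubeadm.yaml.new, it can be sanity-checked without mutating cluster state; a sketch, assuming the kubeadm config validate subcommand available in recent releases:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new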
	I1217 01:57:23.292920   10580 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 01:57:23.298948   10580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:57:23.555705   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:23.774461   10580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:57:23.797469   10580 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500 for IP: 192.168.76.2
	I1217 01:57:23.797574   10580 certs.go:195] generating shared ca certs ...
	I1217 01:57:23.797612   10580 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.797983   10580 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 01:57:23.797983   10580 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 01:57:23.801985   10580 certs.go:257] generating profile certs ...
	I1217 01:57:23.801985   10580 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key
	I1217 01:57:23.802608   10580 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.crt with IP's: []
	I1217 01:57:23.893499   10580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.crt ...
	I1217 01:57:23.893499   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.crt: {Name:mk018179fa6276f140d3c484dc77b112ade6a239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.894491   10580 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key ...
	I1217 01:57:23.894491   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key: {Name:mkf03a928d0759f4e80338ae1a94ef05274842bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.895493   10580 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8
	I1217 01:57:23.895493   10580 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 01:57:23.940939   10580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8 ...
	I1217 01:57:23.940939   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8: {Name:mk793887fd39b61b0148eb1aef73edce147dd7af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.941938   10580 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8 ...
	I1217 01:57:23.941938   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8: {Name:mk75e8d1cb53d5e553bcfb51860f15346eec2f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.941938   10580 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt
	I1217 01:57:23.956750   10580 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key
	I1217 01:57:23.958193   10580 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key
	I1217 01:57:23.958415   10580 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt with IP's: []
	I1217 01:57:24.067269   10580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt ...
	I1217 01:57:24.067269   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt: {Name:mk21db782682ec857bcf614d6ee83e5820624361 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:24.068316   10580 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key ...
	I1217 01:57:24.068316   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key: {Name:mk4bcb88a5770958ea52d64f6df1b6838f8b5fc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:24.097118   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 01:57:24.097649   10580 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 01:57:24.097791   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 01:57:24.098025   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 01:57:24.098025   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 01:57:24.098025   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 01:57:24.098812   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 01:57:24.100115   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:57:24.135459   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:57:24.165011   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:57:24.192410   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:57:24.481059   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 01:57:25.003692   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:57:25.038428   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:57:25.065081   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:57:25.099226   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 01:57:25.144094   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:57:25.174094   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 01:57:25.210940   10580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:57:25.237951   10580 ssh_runner.go:195] Run: openssl version
	I1217 01:57:25.254946   10580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.276935   10580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 01:57:25.294948   10580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.302943   10580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.306934   10580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.370952   10580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:57:25.390944   10580 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
	I1217 01:57:25.415186   10580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.434956   10580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:57:25.453960   10580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.460961   10580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.464957   10580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.515968   10580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:57:25.532957   10580 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 01:57:25.547952   10580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.565954   10580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 01:57:25.583961   10580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.591966   10580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.596965   10580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.654221   10580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:57:25.671221   10580 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
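The <hash>.0 link names above come from OpenSSL's subject-hash scheme: openssl x509 -hash prints the hash, and the cert is symlinked under that name so OpenSSL's lookup finds it. The value can be derived by hand, e.g. for the minikubeCA certificate:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${h}.0"   # per the log above, h=b5213941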
	I1217 01:57:25.688222   10580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:57:25.696236   10580 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:57:25.696236   10580 kubeadm.go:401] StartCluster: {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:57:25.699225   10580 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 01:57:25.732231   10580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:57:25.750219   10580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1217 01:57:25.764216   10580 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:57:25.768221   10580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:57:25.782223   10580 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:57:25.782223   10580 kubeadm.go:158] found existing configuration files:
	
	I1217 01:57:25.787226   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:57:25.811226   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:57:25.817308   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:57:25.846154   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:57:25.861155   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:57:25.865166   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:57:25.882164   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:57:25.894161   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:57:25.898177   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:57:25.916173   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:57:25.936694   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:57:25.940687   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
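
The grep/rm sequence above is minikube's stale-kubeconfig cleanup: each /etc/kubernetes/*.conf is kept only if it still points at https://control-plane.minikube.internal:8443, and grep's exit status 2 here simply means the file does not exist yet, which is expected before a first kubeadm init. The check can be repeated by hand; a sketch, run from a shell inside the node (minikube ssh -p newest-cni-383500):

    # Do the config files exist at all?
    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
        /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
    # If they do, are they still pointing at the minikube control plane?
    sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
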
	I1217 01:57:25.956687   10580 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:57:26.100043   10580 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 01:57:26.198370   10580 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:57:26.302677   10580 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:58:51.115615    7596 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:58:51.115718    7596 kubeadm.go:319] 
	I1217 01:58:51.115916    7596 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:58:51.121578    7596 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:58:51.121578    7596 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:58:51.121578    7596 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:58:51.122136    7596 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 01:58:51.122857    7596 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_INET: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 01:58:51.123993    7596 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 01:58:51.124691    7596 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] OS: Linux
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:58:51.125946    7596 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:58:51.126099    7596 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:58:51.126099    7596 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:58:51.128573    7596 out.go:252]   - Generating certificates and keys ...
	I1217 01:58:51.128573    7596 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:58:51.128573    7596 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:58:51.129197    7596 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 01:58:51.129388    7596 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 01:58:51.129558    7596 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 01:58:51.129682    7596 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:58:51.130781    7596 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:58:51.130943    7596 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:58:51.131040    7596 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:58:51.131231    7596 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:58:51.131356    7596 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:58:51.131482    7596 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:58:51.131482    7596 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:58:51.133818    7596 out.go:252]   - Booting up control plane ...
	I1217 01:58:51.133818    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:58:51.133818    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:58:51.135780    7596 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:58:51.135780    7596 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:58:51.135780    7596 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.002324195s
	I1217 01:58:51.135780    7596 kubeadm.go:319] 
	I1217 01:58:51.135780    7596 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:58:51.135780    7596 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:58:51.135780    7596 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:58:51.135780    7596 kubeadm.go:319] 
	I1217 01:58:51.135780    7596 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:58:51.135780    7596 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:58:51.136777    7596 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:58:51.136777    7596 kubeadm.go:319] 
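
Two parallel test processes interleave from here on: PID 10580 is the newest-cni-383500 profile and PID 7596 is no-preload-184000, and both hit the same wall. The failure is kubeadm's kubelet health gate: it polls http://127.0.0.1:10248/healthz for up to 4m0s, and the kubelet never comes up. The probe and the triage kubeadm suggests can be run manually; a sketch, from a shell inside the affected node (minikube ssh -p no-preload-184000):

    # The endpoint kubeadm polls; a healthy kubelet answers with "ok".
    curl -sSL http://127.0.0.1:10248/healthz
    # The two commands kubeadm recommends when the gate times out.
    systemctl status kubelet
    sudo journalctl -xeu kubelet | tail -n 50
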
	W1217 01:58:51.136777    7596 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.002324195s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
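
As the "will try again" warning above indicates, minikube does not give up after the first failed init: it wipes the half-initialized state with kubeadm reset (next log line) and re-runs the identical kubeadm init. The reset can also be issued manually when iterating on a broken node; a sketch using the same flags as the log:

    # Inside the node: tear down kubeadm state so init can be retried cleanly.
    sudo kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force
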
	
	I1217 01:58:51.139887    7596 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 01:58:51.605403    7596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:58:51.627327    7596 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:58:51.634266    7596 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:58:51.651778    7596 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:58:51.651778    7596 kubeadm.go:158] found existing configuration files:
	
	I1217 01:58:51.657261    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:58:51.670434    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:58:51.674365    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:58:51.692907    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:58:51.707259    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:58:51.711851    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:58:51.731617    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:58:51.746650    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:58:51.750583    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:58:51.769267    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:58:51.784345    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:58:51.789034    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:58:51.805733    7596 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:58:51.926943    7596 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 01:58:52.006918    7596 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:58:52.107226    7596 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:01:27.963444   10580 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:01:27.963444   10580 kubeadm.go:319] 
	I1217 02:01:27.963616   10580 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:01:27.972023   10580 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 02:01:27.973054   10580 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:01:27.973281   10580 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:01:27.973281   10580 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 02:01:27.973281   10580 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 02:01:27.973281   10580 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 02:01:27.973281   10580 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 02:01:27.973879   10580 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_INET: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 02:01:27.975176   10580 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 02:01:27.975817   10580 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] OS: Linux
	I1217 02:01:27.975876   10580 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:01:27.976495   10580 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:01:27.977232   10580 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:01:27.977413   10580 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:01:27.977413   10580 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:01:27.977413   10580 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:01:27.979976   10580 out.go:252]   - Generating certificates and keys ...
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 02:01:27.981175   10580 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 02:01:27.981278   10580 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 02:01:27.982128   10580 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 02:01:27.982285   10580 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 02:01:27.982463   10580 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 02:01:27.982622   10580 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 02:01:27.983316   10580 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 02:01:27.983431   10580 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 02:01:27.985605   10580 out.go:252]   - Booting up control plane ...
	I1217 02:01:27.985605   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 02:01:27.985605   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 02:01:27.985605   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 02:01:27.986216   10580 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 02:01:27.986315   10580 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:01:27.987339   10580 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000575784s
	I1217 02:01:27.987339   10580 kubeadm.go:319] 
	I1217 02:01:27.987339   10580 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:01:27.987339   10580 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:01:27.987339   10580 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:01:27.987339   10580 kubeadm.go:319] 
	I1217 02:01:27.987913   10580 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:01:27.987913   10580 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:01:27.987913   10580 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:01:27.987913   10580 kubeadm.go:319] 
	W1217 02:01:27.987913   10580 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000575784s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
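
Note the two distinct failure signatures in this log: PID 7596's attempt timed out ("context deadline exceeded", the kubelet answered nothing within the window), while PID 10580's attempt above gets "connection refused", meaning nothing is listening on 10248 at all. Either way, minikube's own post-failure probe (a few lines below) is the quickest manual check; a sketch, run inside the node:

    # Prints "inactive" / exits non-zero when the kubelet service is down.
    sudo systemctl is-active kubelet
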
	
	I1217 02:01:27.992425   10580 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 02:01:28.454931   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 02:01:28.474574   10580 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 02:01:28.479997   10580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 02:01:28.494933   10580 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 02:01:28.494933   10580 kubeadm.go:158] found existing configuration files:
	
	I1217 02:01:28.501352   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 02:01:28.516227   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 02:01:28.521874   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 02:01:28.540752   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 02:01:28.554535   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 02:01:28.559019   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 02:01:28.577479   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 02:01:28.592775   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 02:01:28.596757   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 02:01:28.614687   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 02:01:28.629343   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 02:01:28.633759   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 02:01:28.653776   10580 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 02:01:28.777097   10580 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 02:01:28.860083   10580 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 02:01:28.960806   10580 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:02:52.901103    7596 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:02:52.901187    7596 kubeadm.go:319] 
	I1217 02:02:52.901405    7596 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:02:52.906962    7596 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 02:02:52.907051    7596 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:02:52.907051    7596 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:02:52.907051    7596 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 02:02:52.907051    7596 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 02:02:52.907664    7596 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_INET: enabled
	I1217 02:02:52.908322    7596 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 02:02:52.908447    7596 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 02:02:52.908571    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 02:02:52.908730    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 02:02:52.908849    7596 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 02:02:52.909000    7596 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] OS: Linux
	I1217 02:02:52.909731    7596 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:02:52.910342    7596 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:02:52.911109    7596 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:02:52.911252    7596 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:02:52.911252    7596 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:02:52.911252    7596 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:02:52.914099    7596 out.go:252]   - Generating certificates and keys ...
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 02:02:52.915391    7596 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 02:02:52.915391    7596 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 02:02:52.915391    7596 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 02:02:52.915391    7596 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 02:02:52.915391    7596 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 02:02:52.915926    7596 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 02:02:52.916016    7596 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 02:02:52.916016    7596 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 02:02:52.916016    7596 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 02:02:52.916016    7596 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 02:02:52.918827    7596 out.go:252]   - Booting up control plane ...
	I1217 02:02:52.918827    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 02:02:52.920875    7596 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 02:02:52.920875    7596 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:02:52.920875    7596 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000516808s
	I1217 02:02:52.920875    7596 kubeadm.go:319] 
	I1217 02:02:52.920875    7596 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:02:52.920875    7596 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:02:52.920875    7596 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:02:52.920875    7596 kubeadm.go:319] 
	I1217 02:02:52.920875    7596 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:02:52.920875    7596 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:02:52.921883    7596 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:02:52.921883    7596 kubeadm.go:319] 
	I1217 02:02:52.921883    7596 kubeadm.go:403] duration metric: took 8m4.1597601s to StartCluster
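
After 8m4s (two full init attempts, each bounded by the 4m kubelet gate, plus the reset in between) StartCluster gives up, and everything from here on is minikube's automated post-mortem: it enumerates the expected control-plane containers via crictl, then gathers kubelet, dmesg, "describe nodes", Docker, and container-status logs. The same bundle can be produced on demand; a sketch, from the host:

    # Collect the diagnostics minikube gathers below into a single file.
    out/minikube-windows-amd64.exe logs -p no-preload-184000 --file=no-preload-184000-logs.txt
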
	I1217 02:02:52.921883    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:52.925883    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:52.985042    7596 cri.go:89] found id: ""
	I1217 02:02:52.985042    7596 logs.go:282] 0 containers: []
	W1217 02:02:52.985042    7596 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:52.985042    7596 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:52.989497    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:53.035444    7596 cri.go:89] found id: ""
	I1217 02:02:53.035444    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.035444    7596 logs.go:284] No container was found matching "etcd"
	I1217 02:02:53.035444    7596 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:53.040633    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:53.090166    7596 cri.go:89] found id: ""
	I1217 02:02:53.090166    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.090166    7596 logs.go:284] No container was found matching "coredns"
	I1217 02:02:53.090166    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:53.095276    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:53.155229    7596 cri.go:89] found id: ""
	I1217 02:02:53.155292    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.155292    7596 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:53.155292    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:53.159579    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:53.201389    7596 cri.go:89] found id: ""
	I1217 02:02:53.201389    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.201389    7596 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:53.201389    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:53.206627    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:53.251727    7596 cri.go:89] found id: ""
	I1217 02:02:53.251807    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.251807    7596 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:53.251807    7596 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:53.255868    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:53.296927    7596 cri.go:89] found id: ""
	I1217 02:02:53.297002    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.297002    7596 logs.go:284] No container was found matching "kindnet"
	I1217 02:02:53.297002    7596 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:53.297002    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:53.362489    7596 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:53.362489    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:53.402379    7596 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:53.402379    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:53.486459    7596 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:02:53.475461   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.476269   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.480737   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.482819   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.484040   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:02:53.475461   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.476269   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.480737   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.482819   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.484040   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:02:53.486459    7596 logs.go:123] Gathering logs for Docker ...
	I1217 02:02:53.486459    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:02:53.519898    7596 logs.go:123] Gathering logs for container status ...
	I1217 02:02:53.519898    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:02:53.571631    7596 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000516808s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 02:02:53.571705    7596 out.go:285] * 
	W1217 02:02:53.571763    7596 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000516808s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:02:53.571763    7596 out.go:285] * 
	W1217 02:02:53.573684    7596 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:02:53.577599    7596 out.go:203] 
	W1217 02:02:53.580937    7596 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
E1217 02:03:01.648810    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000516808s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	W1217 02:02:53.580937    7596 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:02:53.580937    7596 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 02:02:53.584112    7596 out.go:203] 
	
	
	==> Docker <==
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638787318Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638875828Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638886629Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638892529Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638897830Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638925533Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638969938Z" level=info msg="Initializing buildkit"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.814190912Z" level=info msg="Completed buildkit initialization"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.834145684Z" level=info msg="Daemon has completed initialization"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.834353706Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.834360607Z" level=info msg="API listen on [::]:2376"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.834438816Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 01:54:11 no-preload-184000 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 01:54:12 no-preload-184000 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Loaded network plugin cni"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 01:54:12 no-preload-184000 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:03:01.227209   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:01.228389   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:01.229381   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:01.230956   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:03:01.232231   11359 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.736198] tmpfs: Unknown parameter 'noswap'
	[  +0.306826] CPU: 13 PID: 440898 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000005] RIP: 0033:0x7f86f2041b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f86f2041af6.
	[  +0.000001] RSP: 002b:00007ffdf29d7630 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +1.037447] CPU: 4 PID: 441085 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fed1ac73b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7fed1ac73af6.
	[  +0.000001] RSP: 002b:00007fff679e5600 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[ +20.473571] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 02:03:01 up  2:22,  0 user,  load average: 0.77, 2.41, 3.49
	Linux no-preload-184000 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:02:58 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:58 no-preload-184000 kubelet[11130]: E1217 02:02:58.384188   11130 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:02:58 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:02:58 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:02:59 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 328.
	Dec 17 02:02:59 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:59 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:59 no-preload-184000 kubelet[11198]: E1217 02:02:59.128090   11198 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:02:59 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:02:59 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:02:59 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 329.
	Dec 17 02:02:59 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:59 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:02:59 no-preload-184000 kubelet[11225]: E1217 02:02:59.867339   11225 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:02:59 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:02:59 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:03:00 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 330.
	Dec 17 02:03:00 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:00 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:00 no-preload-184000 kubelet[11255]: E1217 02:03:00.647645   11255 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:03:00 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:03:00 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:03:01 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 331.
	Dec 17 02:03:01 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:03:01 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

                                                
                                                
-- /stdout --
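
The kubelet journal above pins down the root cause of this start failure: kubelet v1.35.0-beta.0 exits immediately because the host is still on cgroup v1 and the kubelet configuration option FailCgroupV1 is not set to false (systemd's restart counter is already at 331), so the apiserver on localhost:8443 never comes up and every later probe sees connection refused. As a local sketch only, not part of this run: one could retry the profile with the exact flag minikube's own suggestion line prints above; moving the WSL2 host to cgroup v2 (e.g. via a kernelCommandLine entry in .wslconfig) is another commonly cited route, but that is an assumption about the Jenkins host, not something this report verifies.

    # Sketch: retry the failing profile with the suggestion printed in the log above
    out/minikube-windows-amd64.exe delete -p no-preload-184000
    out/minikube-windows-amd64.exe start -p no-preload-184000 --driver=docker --extra-config=kubelet.cgroup-driver=systemd
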
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000: exit status 6 (589.4273ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 02:03:02.234155   10268 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-184000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-184000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/DeployApp (5.49s)
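
Both status probes in this section exit 6 for the same reason: the "no-preload-184000" entry is missing from C:\Users\jenkins.minikube4\minikube-integration\kubeconfig, so the harness cannot resolve the apiserver endpoint. The stdout above already names the repair; a minimal sketch of it against this profile:

    # Sketch: rewrite the kubeconfig entry for the profile, as the warning suggests
    out/minikube-windows-amd64.exe update-context -p no-preload-184000
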

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (117.87s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-184000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1217 02:03:04.321409    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:03:07.197921    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:03:14.169212    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:03:32.030671    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:03:44.825379    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-278200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:03:46.392187    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:04:01.843321    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:04:14.100194    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-184000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m55.0559642s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_addons_e23971240287a88151a2b5edd52daaba3879ba4a_13.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-windows-amd64.exe addons enable metrics-server -p no-preload-184000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-184000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-184000 describe deploy/metrics-server -n kube-system: exit status 1 (94.1999ms)

                                                
                                                
** stderr ** 
	error: context "no-preload-184000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-184000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
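
Two symptoms of one failure here: the in-node kubectl apply cannot validate the addon manifests because nothing answers on localhost:8443 (so the --validate=false escape hatch mentioned in the stderr would only swap a validation error for a connection error), and the host-side kubectl fails earlier still because the "no-preload-184000" context was never written. A hedged sketch for confirming both before a rerun, using the status command already shown in this report plus the standard kubectl context listing:

    # Sketch: confirm apiserver state and the missing kubectl context
    out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000
    kubectl config get-contexts
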
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-184000
helpers_test.go:244: (dbg) docker inspect no-preload-184000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed",
	        "Created": "2025-12-17T01:54:01.802457191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 400896,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:54:02.102156548Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/hostname",
	        "HostsPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/hosts",
	        "LogPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed-json.log",
	        "Name": "/no-preload-184000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-184000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-184000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-184000",
	                "Source": "/var/lib/docker/volumes/no-preload-184000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-184000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-184000",
	                "name.minikube.sigs.k8s.io": "no-preload-184000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "878415a4285bb4e9322b366762510a9c3489066b0ef84b5d48358f5f81e082bf",
	            "SandboxKey": "/var/run/docker/netns/878415a4285b",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62904"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62905"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62906"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62907"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "62908"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-184000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6adb91d102dfa92bfa154127e93e39401be06a5d21df5043f3e85e012e93e321",
	                    "EndpointID": "8e3f71a707f374d60db9e819d8097a078527854d326de7a03065e5d1fcc8c8bd",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-184000",
	                        "335cbfb80690"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
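
The inspect output shows the kic container itself is healthy: State.Status is running with RestartCount 0, the node got 3 GiB of memory (3221225472 bytes) and 2 CPUs (NanoCpus 2000000000), and 8443/tcp is published on 127.0.0.1:62908. The connection refusals above therefore originate inside the node (the crash-looping kubelet), not at the Docker layer. For post-mortems like this, a shorter probe than the full JSON, using standard docker inspect --format templating and shown here only as a sketch:

    # Sketch: pull just the container state instead of the full inspect JSON
    docker inspect --format "{{.State.Status}} started={{.State.StartedAt}}" no-preload-184000
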
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-184000 -n no-preload-184000
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-184000 -n no-preload-184000: exit status 6 (568.2772ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 02:04:58.042681   10676 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-184000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-184000 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-184000 logs -n 25: (1.1320141s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p embed-certs-653800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                   │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:55 UTC │
	│ stop    │ -p embed-certs-653800 --alsologtostderr -v=3                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:55 UTC │ 17 Dec 25 01:56 UTC │
	│ addons  │ enable dashboard -p embed-certs-653800 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                              │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p embed-certs-653800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-278200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                         │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ stop    │ -p default-k8s-diff-port-278200 --alsologtostderr -v=3                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-278200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p default-k8s-diff-port-278200 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ old-k8s-version-044000 image list --format=json                                                                                                                                                                            │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ pause   │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ unpause │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │                     │
	│ image   │ embed-certs-653800 image list --format=json                                                                                                                                                                                │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ default-k8s-diff-port-278200 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-184000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
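	
	The Audit table above is part of the `minikube logs` output the harness collected. A minimal way to reproduce a dump like this locally, assuming the no-preload-184000 profile still exists on the host (this is the same invocation shown at the top of this section):
	
		out/minikube-windows-amd64.exe -p no-preload-184000 logs -n 25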
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 01:56:50
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 01:56:50.801354   10580 out.go:360] Setting OutFile to fd 1172 ...
	I1217 01:56:50.842347   10580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:56:50.842347   10580 out.go:374] Setting ErrFile to fd 824...
	I1217 01:56:50.842347   10580 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:56:50.868487   10580 out.go:368] Setting JSON to false
	I1217 01:56:50.873633   10580 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8199,"bootTime":1765928411,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 01:56:50.873795   10580 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 01:56:50.877230   10580 out.go:179] * [newest-cni-383500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 01:56:50.879602   10580 notify.go:221] Checking for updates...
	I1217 01:56:50.882592   10580 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 01:56:50.886357   10580 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 01:56:50.888496   10580 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 01:56:50.891194   10580 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 01:56:50.892900   10580 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 01:56:50.897014   10580 config.go:182] Loaded profile config "default-k8s-diff-port-278200": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:56:50.897014   10580 config.go:182] Loaded profile config "embed-certs-653800": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:56:50.898014   10580 config.go:182] Loaded profile config "no-preload-184000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 01:56:50.898014   10580 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 01:56:51.023603   10580 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 01:56:51.027600   10580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:56:51.269309   10580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:56:51.250186339 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:56:51.271302   10580 out.go:179] * Using the docker driver based on user configuration
	I1217 01:56:51.274302   10580 start.go:309] selected driver: docker
	I1217 01:56:51.274302   10580 start.go:927] validating driver "docker" against <nil>
	I1217 01:56:51.274302   10580 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 01:56:51.315871   10580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:56:51.584149   10580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:92 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:56:51.563534441 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:56:51.584149   10580 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1217 01:56:51.584149   10580 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1217 01:56:51.585155   10580 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 01:56:51.589148   10580 out.go:179] * Using Docker Desktop driver with root privileges
	I1217 01:56:51.590146   10580 cni.go:84] Creating CNI manager for ""
	I1217 01:56:51.591150   10580 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 01:56:51.591150   10580 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 01:56:51.591150   10580 start.go:353] cluster config:
	{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:56:51.593150   10580 out.go:179] * Starting "newest-cni-383500" primary control-plane node in "newest-cni-383500" cluster
	I1217 01:56:51.596146   10580 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 01:56:51.597151   10580 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 01:56:51.600152   10580 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:56:51.600152   10580 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 01:56:51.600152   10580 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 01:56:51.600152   10580 cache.go:65] Caching tarball of preloaded images
	I1217 01:56:51.600152   10580 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 01:56:51.600152   10580 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 01:56:51.601151   10580 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 01:56:51.601151   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json: {Name:mkf80e0956bcb8fe665f18deea862644aea3658c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:56:51.682130   10580 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 01:56:51.682186   10580 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 01:56:51.682226   10580 cache.go:243] Successfully downloaded all kic artifacts
	I1217 01:56:51.682296   10580 start.go:360] acquireMachinesLock for newest-cni-383500: {Name:mk34ae41921c4a11acc2a38ede8796b825a35934 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 01:56:51.682463   10580 start.go:364] duration metric: took 127.8µs to acquireMachinesLock for "newest-cni-383500"
	I1217 01:56:51.682643   10580 start.go:93] Provisioning new machine with config: &{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
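	
	The cluster config printed in the two dumps above is the same structure that profile.go reported saving to config.json a few lines earlier. To inspect it on this host you could, assuming the same MINIKUBE_HOME layout as in the log, print the saved file from a cmd prompt:
	
		type C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json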
	I1217 01:56:51.682643   10580 start.go:125] createHost starting for "" (driver="docker")
	W1217 01:56:50.658968   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	W1217 01:56:53.155347   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	I1217 01:56:50.357392    6652 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:63284/healthz ...
	I1217 01:56:50.369628    6652 api_server.go:279] https://127.0.0.1:63284/healthz returned 200:
	ok
	I1217 01:56:50.373212    6652 api_server.go:141] control plane version: v1.34.2
	I1217 01:56:50.373212    6652 api_server.go:131] duration metric: took 1.5164341s to wait for apiserver health ...
	I1217 01:56:50.373212    6652 system_pods.go:43] waiting for kube-system pods to appear ...
	I1217 01:56:50.383881    6652 system_pods.go:59] 8 kube-system pods found
	I1217 01:56:50.383935    6652 system_pods.go:61] "coredns-66bc5c9577-mq7nr" [e3b40fbf-c8cf-4da5-a3e1-544cdb2cf9d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:56:50.383972    6652 system_pods.go:61] "etcd-default-k8s-diff-port-278200" [a72b7231-603f-4f60-9395-a7f842c86452] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 01:56:50.383972    6652 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-278200" [8dc29fce-1059-4acc-8a09-64f9eed9a84a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 01:56:50.383972    6652 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-278200" [916662d2-3e76-4bf9-9b11-b4c5cd906d1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 01:56:50.383972    6652 system_pods.go:61] "kube-proxy-hp6zw" [8399cddb-2b50-4401-adbb-83631e5b1a3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 01:56:50.383972    6652 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-278200" [01597b66-6476-4b34-9010-67c8fa5ba2b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 01:56:50.383972    6652 system_pods.go:61] "metrics-server-746fcd58dc-zg2gc" [1347d3c4-9a8a-4e8c-9c00-d649fa23179f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 01:56:50.383972    6652 system_pods.go:61] "storage-provisioner" [89564fde-7887-446a-bab4-f662064c9fde] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 01:56:50.383972    6652 system_pods.go:74] duration metric: took 10.76ms to wait for pod list to return data ...
	I1217 01:56:50.383972    6652 default_sa.go:34] waiting for default service account to be created ...
	I1217 01:56:50.472293    6652 default_sa.go:45] found service account: "default"
	I1217 01:56:50.472293    6652 default_sa.go:55] duration metric: took 88.3195ms for default service account to be created ...
	I1217 01:56:50.472293    6652 system_pods.go:116] waiting for k8s-apps to be running ...
	I1217 01:56:50.550966    6652 system_pods.go:86] 8 kube-system pods found
	I1217 01:56:50.550966    6652 system_pods.go:89] "coredns-66bc5c9577-mq7nr" [e3b40fbf-c8cf-4da5-a3e1-544cdb2cf9d8] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1217 01:56:50.551963    6652 system_pods.go:89] "etcd-default-k8s-diff-port-278200" [a72b7231-603f-4f60-9395-a7f842c86452] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1217 01:56:50.551963    6652 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-278200" [8dc29fce-1059-4acc-8a09-64f9eed9a84a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1217 01:56:50.551963    6652 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-278200" [916662d2-3e76-4bf9-9b11-b4c5cd906d1c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1217 01:56:50.551963    6652 system_pods.go:89] "kube-proxy-hp6zw" [8399cddb-2b50-4401-adbb-83631e5b1a3f] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1217 01:56:50.551963    6652 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-278200" [01597b66-6476-4b34-9010-67c8fa5ba2b7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1217 01:56:50.551963    6652 system_pods.go:89] "metrics-server-746fcd58dc-zg2gc" [1347d3c4-9a8a-4e8c-9c00-d649fa23179f] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1217 01:56:50.551963    6652 system_pods.go:89] "storage-provisioner" [89564fde-7887-446a-bab4-f662064c9fde] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1217 01:56:50.551963    6652 system_pods.go:126] duration metric: took 79.6691ms to wait for k8s-apps to be running ...
	I1217 01:56:50.551963    6652 system_svc.go:44] waiting for kubelet service to be running ....
	I1217 01:56:50.558963    6652 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:56:50.647965    6652 system_svc.go:56] duration metric: took 96.0006ms WaitForService to wait for kubelet
	I1217 01:56:50.647965    6652 kubeadm.go:587] duration metric: took 11.8438008s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 01:56:50.647965    6652 node_conditions.go:102] verifying NodePressure condition ...
	I1217 01:56:50.655959    6652 node_conditions.go:122] node storage ephemeral capacity is 1055762868Ki
	I1217 01:56:50.655959    6652 node_conditions.go:123] node cpu capacity is 16
	I1217 01:56:50.655959    6652 node_conditions.go:105] duration metric: took 7.9936ms to run NodePressure ...
	I1217 01:56:50.655959    6652 start.go:242] waiting for startup goroutines ...
	I1217 01:56:50.655959    6652 start.go:247] waiting for cluster config update ...
	I1217 01:56:50.655959    6652 start.go:256] writing updated cluster config ...
	I1217 01:56:50.662974    6652 ssh_runner.go:195] Run: rm -f paused
	I1217 01:56:50.670974    6652 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:56:50.679961    6652 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-mq7nr" in "kube-system" namespace to be "Ready" or be gone ...
	W1217 01:56:52.758113    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
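	
	The pod_ready lines above come from minikube's extra wait loop: for up to 4m0s it polls each kube-system pod carrying one of the listed labels until it reports "Ready" or disappears. A rough hand-run equivalent for the k8s-app=kube-dns portion of that wait, assuming kubectl is pointed at the same cluster:
	
		kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m0s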
	I1217 01:56:51.685685   10580 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1217 01:56:51.686059   10580 start.go:159] libmachine.API.Create for "newest-cni-383500" (driver="docker")
	I1217 01:56:51.686127   10580 client.go:173] LocalClient.Create starting
	I1217 01:56:51.686740   10580 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem
	I1217 01:56:51.686997   10580 main.go:143] libmachine: Decoding PEM data...
	I1217 01:56:51.686997   10580 main.go:143] libmachine: Parsing certificate...
	I1217 01:56:51.687153   10580 main.go:143] libmachine: Reading certificate data from C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem
	I1217 01:56:51.687320   10580 main.go:143] libmachine: Decoding PEM data...
	I1217 01:56:51.687320   10580 main.go:143] libmachine: Parsing certificate...
	I1217 01:56:51.691438   10580 cli_runner.go:164] Run: docker network inspect newest-cni-383500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1217 01:56:51.737765   10580 cli_runner.go:211] docker network inspect newest-cni-383500 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1217 01:56:51.740755   10580 network_create.go:284] running [docker network inspect newest-cni-383500] to gather additional debugging logs...
	I1217 01:56:51.740755   10580 cli_runner.go:164] Run: docker network inspect newest-cni-383500
	W1217 01:56:51.801443   10580 cli_runner.go:211] docker network inspect newest-cni-383500 returned with exit code 1
	I1217 01:56:51.802437   10580 network_create.go:287] error running [docker network inspect newest-cni-383500]: docker network inspect newest-cni-383500: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-383500 not found
	I1217 01:56:51.802437   10580 network_create.go:289] output of [docker network inspect newest-cni-383500]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-383500 not found
	
	** /stderr **
	I1217 01:56:51.804999   10580 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1217 01:56:51.880941   10580 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:56:51.896006   10580 network.go:209] skipping subnet 192.168.58.0/24 that is reserved: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:56:51.908781   10580 network.go:206] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000faab70}
	I1217 01:56:51.908781   10580 network_create.go:124] attempt to create docker network newest-cni-383500 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I1217 01:56:51.911893   10580 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500
	W1217 01:56:51.964261   10580 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500 returned with exit code 1
	W1217 01:56:51.964261   10580 network_create.go:149] failed to create docker network newest-cni-383500 192.168.67.0/24 with gateway 192.168.67.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500: exit status 1
	stdout:
	
	stderr:
	Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
	W1217 01:56:51.964261   10580 network_create.go:116] failed to create docker network newest-cni-383500 192.168.67.0/24, will retry: subnet is taken
	I1217 01:56:51.989641   10580 network.go:209] skipping subnet 192.168.67.0/24 that is reserved: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
	I1217 01:56:52.003768   10580 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000f5b5c0}
	I1217 01:56:52.003768   10580 network_create.go:124] attempt to create docker network newest-cni-383500 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1217 01:56:52.007075   10580 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-383500 newest-cni-383500
	I1217 01:56:52.149371   10580 network_create.go:108] docker network newest-cni-383500 192.168.76.0/24 created
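	
	The network_create sequence above is minikube's subnet-retry behavior: the first create on 192.168.67.0/24 was rejected by the daemon with "Pool overlaps with other one on this address space", so that /24 was marked as taken and the next free private subnet, 192.168.76.0/24, was tried and succeeded. A stripped-down manual sketch of the same probe-and-retry, using a hypothetical network name demo-net:
	
		# first candidate; fails with "Pool overlaps with other one on this address space" if the /24 is taken
		docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 demo-net
		# on overlap, step to the next private /24 and retry
		docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 demo-net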
	I1217 01:56:52.149371   10580 kic.go:121] calculated static IP "192.168.76.2" for the "newest-cni-383500" container
	I1217 01:56:52.161020   10580 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1217 01:56:52.221477   10580 cli_runner.go:164] Run: docker volume create newest-cni-383500 --label name.minikube.sigs.k8s.io=newest-cni-383500 --label created_by.minikube.sigs.k8s.io=true
	I1217 01:56:52.277863   10580 oci.go:103] Successfully created a docker volume newest-cni-383500
	I1217 01:56:52.281622   10580 cli_runner.go:164] Run: docker run --rm --name newest-cni-383500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-383500 --entrypoint /usr/bin/test -v newest-cni-383500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib
	I1217 01:56:53.597934   10580 cli_runner.go:217] Completed: docker run --rm --name newest-cni-383500-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-383500 --entrypoint /usr/bin/test -v newest-cni-383500:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -d /var/lib: (1.3162925s)
	I1217 01:56:53.597934   10580 oci.go:107] Successfully prepared a docker volume newest-cni-383500
	I1217 01:56:53.597934   10580 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:56:53.597934   10580 kic.go:194] Starting extracting preloaded images to volume ...
	I1217 01:56:53.602121   10580 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-383500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir
	W1217 01:56:55.164284   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	W1217 01:56:57.657496   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	W1217 01:56:55.197325    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:56:57.691480    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:56:59.691833    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:00.414359   10700 pod_ready.go:104] pod "coredns-66bc5c9577-rkqgn" is not "Ready", error: <nil>
	I1217 01:57:01.221784   10700 pod_ready.go:94] pod "coredns-66bc5c9577-rkqgn" is "Ready"
	I1217 01:57:01.221832   10700 pod_ready.go:86] duration metric: took 31.57611s for pod "coredns-66bc5c9577-rkqgn" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.231015   10700 pod_ready.go:83] waiting for pod "etcd-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.305989   10700 pod_ready.go:94] pod "etcd-embed-certs-653800" is "Ready"
	I1217 01:57:01.306038   10700 pod_ready.go:86] duration metric: took 74.9721ms for pod "etcd-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.362260   10700 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.373797   10700 pod_ready.go:94] pod "kube-apiserver-embed-certs-653800" is "Ready"
	I1217 01:57:01.373797   10700 pod_ready.go:86] duration metric: took 11.4721ms for pod "kube-apiserver-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.379508   10700 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:01.421736   10700 pod_ready.go:94] pod "kube-controller-manager-embed-certs-653800" is "Ready"
	I1217 01:57:01.421778   10700 pod_ready.go:86] duration metric: took 42.2686ms for pod "kube-controller-manager-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.549272   10700 pod_ready.go:83] waiting for pod "kube-proxy-tnkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.831507   10700 pod_ready.go:94] pod "kube-proxy-tnkvj" is "Ready"
	I1217 01:57:02.832053   10700 pod_ready.go:86] duration metric: took 282.7765ms for pod "kube-proxy-tnkvj" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.837864   10700 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.850194   10700 pod_ready.go:94] pod "kube-scheduler-embed-certs-653800" is "Ready"
	I1217 01:57:02.850247   10700 pod_ready.go:86] duration metric: took 12.3828ms for pod "kube-scheduler-embed-certs-653800" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:02.850295   10700 pod_ready.go:40] duration metric: took 33.2150881s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:57:02.959538   10700 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 01:57:03.043739   10700 out.go:179] * Done! kubectl is now configured to use "embed-certs-653800" cluster and "default" namespace by default
	W1217 01:57:01.693305    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:04.195654    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:06.294817    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:08.700814    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:57:10.483352   10580 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-383500:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 -I lz4 -xf /preloaded.tar -C /extractDir: (16.8803148s)
	I1217 01:57:10.483443   10580 kic.go:203] duration metric: took 16.8852234s to extract preloaded images to volume ...
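	
	The two cli_runner lines above are the preload-extraction step: a throwaway container mounts the host's preload tarball read-only alongside the newest-cni-383500 volume and untars the cached images into the volume. The shape of the command, reduced to its essentials with placeholders for the long paths that appear verbatim in the log (single line there; split here for readability):
	
		docker run --rm --entrypoint /usr/bin/tar \
		  -v <preload tarball on host>:/preloaded.tar:ro -v newest-cni-383500:/extractDir \
		  <kicbase image> -I lz4 -xf /preloaded.tar -C /extractDir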
	I1217 01:57:10.489300   10580 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 01:57:10.753192   10580 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:93 OomKillDisable:true NGoroutines:95 SystemTime:2025-12-17 01:57:10.732557974 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 01:57:10.757222   10580 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	W1217 01:57:11.205059    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:13.689668    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:57:11.047255   10580 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-383500 --name newest-cni-383500 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-383500 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-383500 --network newest-cni-383500 --ip 192.168.76.2 --volume newest-cni-383500:/var --security-opt apparmor=unconfined --memory=3072mb --memory-swap=3072mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78
	I1217 01:57:11.789740   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Running}}
	I1217 01:57:11.849518   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 01:57:11.908509   10580 cli_runner.go:164] Run: docker exec newest-cni-383500 stat /var/lib/dpkg/alternatives/iptables
	I1217 01:57:12.021676   10580 oci.go:144] the created container "newest-cni-383500" has a running status.
	I1217 01:57:12.021676   10580 kic.go:225] Creating ssh key for kic: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa...
	I1217 01:57:12.131609   10580 kic_runner.go:191] docker (temp): C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1217 01:57:12.208714   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 01:57:12.272788   10580 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1217 01:57:12.273496   10580 kic_runner.go:114] Args: [docker exec --privileged newest-cni-383500 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1217 01:57:12.387830   10580 kic.go:265] ensuring only current user has permissions to key file located at : C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa...
	I1217 01:57:14.496810   10580 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 01:57:14.552924   10580 machine.go:94] provisionDockerMachine start ...
	I1217 01:57:14.556597   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:14.614668   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:14.628589   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:14.628589   10580 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 01:57:14.803670   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 01:57:14.803752   10580 ubuntu.go:182] provisioning hostname "newest-cni-383500"
	I1217 01:57:14.806966   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:14.872659   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:14.873288   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:14.873288   10580 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-383500 && echo "newest-cni-383500" | sudo tee /etc/hostname
	I1217 01:57:15.070847   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 01:57:15.076754   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:15.138180   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:15.138558   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:15.138558   10580 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-383500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-383500/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-383500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 01:57:15.322611   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: 
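The hostname script that just ran is idempotent: it only edits /etc/hosts when no line already ends with the node name, and it prefers rewriting an existing 127.0.1.1 alias over appending a new one. An annotated restatement (hostname taken from this run):

    NAME=newest-cni-383500
    if ! grep -xq ".*\s${NAME}" /etc/hosts; then        # already pinned? do nothing
      if grep -xq '127.0.1.1\s.*' /etc/hosts; then
        sudo sed -i "s/^127.0.1.1\s.*/127.0.1.1 ${NAME}/g" /etc/hosts   # rewrite the alias in place
      else
        echo "127.0.1.1 ${NAME}" | sudo tee -a /etc/hosts               # or append one
      fi
    fi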
	I1217 01:57:15.322611   10580 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 01:57:15.322611   10580 ubuntu.go:190] setting up certificates
	I1217 01:57:15.322611   10580 provision.go:84] configureAuth start
	I1217 01:57:15.327543   10580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 01:57:15.379974   10580 provision.go:143] copyHostCerts
	I1217 01:57:15.380366   10580 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 01:57:15.380414   10580 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 01:57:15.380832   10580 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 01:57:15.382184   10580 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 01:57:15.382226   10580 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 01:57:15.382581   10580 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 01:57:15.383683   10580 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 01:57:15.383736   10580 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 01:57:15.384159   10580 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 01:57:15.384159   10580 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-383500 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-383500]
	I1217 01:57:15.508571   10580 provision.go:177] copyRemoteCerts
	I1217 01:57:15.512616   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 01:57:15.515422   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:15.573004   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:15.707286   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 01:57:15.746639   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 01:57:15.775638   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 01:57:15.812045   10580 provision.go:87] duration metric: took 488.4307ms to configureAuth
	I1217 01:57:15.812045   10580 ubuntu.go:206] setting minikube options for container-runtime
	I1217 01:57:15.812045   10580 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 01:57:15.815050   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	W1217 01:57:15.691769    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:17.697151    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:57:15.867044   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:15.867044   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:15.867044   10580 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 01:57:16.041586   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 01:57:16.041586   10580 ubuntu.go:71] root file system type: overlay
	I1217 01:57:16.041586   10580 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 01:57:16.045689   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:16.104012   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:16.104611   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:16.104703   10580 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 01:57:16.297193   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 01:57:16.300844   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:16.360905   10580 main.go:143] libmachine: Using SSH client type: native
	I1217 01:57:16.361498   10580 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63415 <nil> <nil>}
	I1217 01:57:16.361540   10580 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 01:57:18.042542   10580 main.go:143] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2025-12-12 14:48:15.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2025-12-17 01:57:16.287130539 +0000
	@@ -9,23 +9,34 @@
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutStartSec=0
	-RestartSec=2
	 Restart=always
	 
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	+
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	+LimitNOFILE=infinity
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I1217 01:57:18.042542   10580 machine.go:97] duration metric: took 3.4895662s to provisionDockerMachine
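The unit update that finished above follows a write-compare-swap pattern: the desired docker.service is always written to a *.new file, and the service is only replaced and restarted when diff reports a difference, so an already-converged machine is left alone. A sketch of the same pattern (paths from the log; the *.new file is assumed to have just been rewritten):

    UNIT=/lib/systemd/system/docker.service
    if ! sudo diff -u "$UNIT" "$UNIT.new"; then   # diff exits non-zero when the files differ
      sudo mv "$UNIT.new" "$UNIT"
      sudo systemctl -f daemon-reload
      sudo systemctl -f enable docker
      sudo systemctl -f restart docker
    fi

The duplicated ExecStart= in the written unit is deliberate: systemd rejects two ExecStart= values for non-oneshot services, so the first, empty assignment clears the command inherited from the base unit before the new one is set.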
	I1217 01:57:18.042542   10580 client.go:176] duration metric: took 26.3559894s to LocalClient.Create
	I1217 01:57:18.042542   10580 start.go:167] duration metric: took 26.3560942s to libmachine.API.Create "newest-cni-383500"
	I1217 01:57:18.042542   10580 start.go:293] postStartSetup for "newest-cni-383500" (driver="docker")
	I1217 01:57:18.042542   10580 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 01:57:18.050002   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 01:57:18.053976   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.112173   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:18.256941   10580 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 01:57:18.268729   10580 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 01:57:18.268729   10580 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 01:57:18.268729   10580 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 01:57:18.268729   10580 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 01:57:18.269469   10580 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 01:57:18.273808   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 01:57:18.289831   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 01:57:18.317384   10580 start.go:296] duration metric: took 274.8381ms for postStartSetup
	I1217 01:57:18.322385   10580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 01:57:18.369389   10580 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 01:57:18.375387   10580 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:57:18.381078   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.432604   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:18.561382   10580 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 01:57:18.571573   10580 start.go:128] duration metric: took 26.8885332s to createHost
	I1217 01:57:18.571573   10580 start.go:83] releasing machines lock for "newest-cni-383500", held for 26.8886481s
	I1217 01:57:18.575096   10580 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 01:57:18.630669   10580 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 01:57:18.634666   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.635666   10580 ssh_runner.go:195] Run: cat /version.json
	I1217 01:57:18.639677   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:18.695664   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 01:57:18.695664   10580 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63415 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	W1217 01:57:18.859792   10580 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 01:57:18.877228   10580 ssh_runner.go:195] Run: systemctl --version
	I1217 01:57:18.892439   10580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 01:57:18.900947   10580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 01:57:18.905555   10580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 01:57:18.954841   10580 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
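The find+mv step above sidelines competing bridge/podman CNI configs by renaming them to *.mk_disabled so only minikube's CNI choice stays active. If such configs ever need restoring, the mirror rename undoes it (a sketch; find runs under sudo, so the inner mv needs no extra sudo):

    sudo find /etc/cni/net.d -maxdepth 1 -type f -name '*.mk_disabled' \
      -exec sh -c 'for f; do mv "$f" "${f%.mk_disabled}"; done' _ {} +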
	I1217 01:57:18.954952   10580 start.go:496] detecting cgroup driver to use...
	I1217 01:57:18.955015   10580 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:57:18.955015   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:57:18.991199   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	W1217 01:57:19.008171   10580 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 01:57:19.008230   10580 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 01:57:19.013119   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 01:57:19.028717   10580 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 01:57:19.032858   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 01:57:19.052914   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:57:19.072904   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 01:57:19.095550   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 01:57:19.115854   10580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 01:57:19.132848   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 01:57:19.151846   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 01:57:19.172853   10580 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
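The two sed edits above are a delete-then-reinsert idiom for keeping a single authoritative TOML key: any existing enable_unprivileged_ports line is dropped first, then a fresh one is appended directly under the [plugins."io.containerd.grpc.v1.cri"] header, with \1 reusing the header's indentation. Restated without the sh -c wrapping:

    sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml
    sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml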
	I1217 01:57:19.193907   10580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 01:57:19.210892   10580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 01:57:19.227892   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:19.399536   10580 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 01:57:19.601453   10580 start.go:496] detecting cgroup driver to use...
	I1217 01:57:19.601453   10580 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 01:57:19.605450   10580 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 01:57:19.629461   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:57:19.656299   10580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 01:57:19.736745   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 01:57:19.764285   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 01:57:19.789001   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 01:57:19.815453   10580 ssh_runner.go:195] Run: which cri-dockerd
	I1217 01:57:19.827238   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 01:57:19.842026   10580 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 01:57:19.874597   10580 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 01:57:20.041348   10580 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 01:57:20.226962   10580 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 01:57:20.226962   10580 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
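The log records only that a 130-byte daemon.json was written to configure the "cgroupfs" cgroup driver; the payload itself is not shown. A plausible minimal file for that setting would look like the following (an assumption, not the verbatim bytes from this run):

    # Hypothetical minimal /etc/docker/daemon.json for the cgroupfs driver.
    cat <<'EOF' | sudo tee /etc/docker/daemon.json
    {
      "exec-opts": ["native.cgroupdriver=cgroupfs"]
    }
    EOF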
	I1217 01:57:20.254551   10580 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 01:57:20.278555   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:20.468211   10580 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 01:57:21.513591   10580 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1.0453647s)
	I1217 01:57:21.520768   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 01:57:21.544117   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 01:57:21.578618   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:57:21.602252   10580 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 01:57:21.754251   10580 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 01:57:21.925790   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:22.049631   10580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 01:57:22.080439   10580 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 01:57:22.102178   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:22.247555   10580 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 01:57:22.356045   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 01:57:22.374818   10580 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 01:57:22.380720   10580 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 01:57:22.388747   10580 start.go:564] Will wait 60s for crictl version
	I1217 01:57:22.393402   10580 ssh_runner.go:195] Run: which crictl
	I1217 01:57:22.405105   10580 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 01:57:22.456110   10580 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 01:57:22.460422   10580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:57:22.517812   10580 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 01:57:22.562431   10580 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 01:57:22.566477   10580 cli_runner.go:164] Run: docker exec -t newest-cni-383500 dig +short host.docker.internal
	I1217 01:57:22.701109   10580 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 01:57:22.707802   10580 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 01:57:22.717558   10580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
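The one-liner above updates /etc/hosts with a strip-then-append pattern: filter out any stale host.minikube.internal line, append the fresh mapping, stage the result under /tmp, and copy it back in a single sudo step so the live file is never left half-written. Expanded for readability (IP from the log):

    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.65.254\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts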
	I1217 01:57:22.737642   10580 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 01:57:22.798183   10580 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	W1217 01:57:20.222966    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	W1217 01:57:22.694494    6652 pod_ready.go:104] pod "coredns-66bc5c9577-mq7nr" is not "Ready", error: <nil>
	I1217 01:57:23.189475    6652 pod_ready.go:94] pod "coredns-66bc5c9577-mq7nr" is "Ready"
	I1217 01:57:23.189475    6652 pod_ready.go:86] duration metric: took 32.5090332s for pod "coredns-66bc5c9577-mq7nr" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.194104    6652 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.202184    6652 pod_ready.go:94] pod "etcd-default-k8s-diff-port-278200" is "Ready"
	I1217 01:57:23.202184    6652 pod_ready.go:86] duration metric: took 8.0443ms for pod "etcd-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.206828    6652 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.213978    6652 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-278200" is "Ready"
	I1217 01:57:23.213978    6652 pod_ready.go:86] duration metric: took 7.1505ms for pod "kube-apiserver-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.217306    6652 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.387857    6652 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-278200" is "Ready"
	I1217 01:57:23.387920    6652 pod_ready.go:86] duration metric: took 170.6119ms for pod "kube-controller-manager-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.587111    6652 pod_ready.go:83] waiting for pod "kube-proxy-hp6zw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:23.985373    6652 pod_ready.go:94] pod "kube-proxy-hp6zw" is "Ready"
	I1217 01:57:23.986730    6652 pod_ready.go:86] duration metric: took 399.613ms for pod "kube-proxy-hp6zw" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:24.201566    6652 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:24.586537    6652 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-278200" is "Ready"
	I1217 01:57:24.586586    6652 pod_ready.go:86] duration metric: took 385.0143ms for pod "kube-scheduler-default-k8s-diff-port-278200" in "kube-system" namespace to be "Ready" or be gone ...
	I1217 01:57:24.586640    6652 pod_ready.go:40] duration metric: took 33.9151651s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1217 01:57:24.687654    6652 start.go:625] kubectl: 1.34.3, cluster: 1.34.2 (minor skew: 0)
	I1217 01:57:25.088107    6652 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-278200" cluster and "default" namespace by default
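The per-pod readiness waits logged above poll each kube-system control-plane pod by label until it reports Ready (or disappears). A rough kubectl equivalent, ignoring the "or be gone" branch (selectors from the log; the timeout is an arbitrary choice):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait --for=condition=Ready pod -l "$sel" --timeout=120s
    done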
	I1217 01:57:22.800238   10580 kubeadm.go:884] updating cluster {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 01:57:22.800267   10580 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 01:57:22.804334   10580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 01:57:22.840199   10580 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 01:57:22.840199   10580 docker.go:621] Images already preloaded, skipping extraction
	I1217 01:57:22.843860   10580 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 01:57:22.875886   10580 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 01:57:22.875953   10580 cache_images.go:86] Images are preloaded, skipping loading
	I1217 01:57:22.876007   10580 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1217 01:57:22.876138   10580 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-383500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
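The kubelet unit above uses the same empty-ExecStart= override idiom as the docker.service drop-in earlier: the blank assignment clears the inherited command so the second ExecStart= is the only one systemd sees. A minimal standalone drop-in showing the idiom (path and trimmed flags are illustrative, not the exact file minikube writes):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' '[Service]' 'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf' |
      sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    sudo systemctl daemon-reload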
	I1217 01:57:22.881452   10580 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 01:57:22.963596   10580 cni.go:84] Creating CNI manager for ""
	I1217 01:57:22.963596   10580 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 01:57:22.963596   10580 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 01:57:22.963596   10580 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-383500 NodeName:newest-cni-383500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 01:57:22.964766   10580 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-383500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 01:57:22.971170   10580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 01:57:22.988148   10580 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 01:57:22.993571   10580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 01:57:23.008239   10580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 01:57:23.168781   10580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 01:57:23.268253   10580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1217 01:57:23.292920   10580 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 01:57:23.298948   10580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 01:57:23.555705   10580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 01:57:23.774461   10580 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 01:57:23.797469   10580 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500 for IP: 192.168.76.2
	I1217 01:57:23.797574   10580 certs.go:195] generating shared ca certs ...
	I1217 01:57:23.797612   10580 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.797983   10580 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 01:57:23.797983   10580 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 01:57:23.801985   10580 certs.go:257] generating profile certs ...
	I1217 01:57:23.801985   10580 certs.go:364] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key
	I1217 01:57:23.802608   10580 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.crt with IP's: []
	I1217 01:57:23.893499   10580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.crt ...
	I1217 01:57:23.893499   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.crt: {Name:mk018179fa6276f140d3c484dc77b112ade6a239 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.894491   10580 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key ...
	I1217 01:57:23.894491   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key: {Name:mkf03a928d0759f4e80338ae1a94ef05274842bd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.895493   10580 certs.go:364] generating signed profile cert for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8
	I1217 01:57:23.895493   10580 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1217 01:57:23.940939   10580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8 ...
	I1217 01:57:23.940939   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8: {Name:mk793887fd39b61b0148eb1aef73edce147dd7af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.941938   10580 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8 ...
	I1217 01:57:23.941938   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8: {Name:mk75e8d1cb53d5e553bcfb51860f15346eec2f02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:23.941938   10580 certs.go:382] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt.c9c9b4b8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt
	I1217 01:57:23.956750   10580 certs.go:386] copying C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key
	I1217 01:57:23.958193   10580 certs.go:364] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key
	I1217 01:57:23.958415   10580 crypto.go:68] Generating cert C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt with IP's: []
	I1217 01:57:24.067269   10580 crypto.go:156] Writing cert to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt ...
	I1217 01:57:24.067269   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt: {Name:mk21db782682ec857bcf614d6ee83e5820624361 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:24.068316   10580 crypto.go:164] Writing key to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key ...
	I1217 01:57:24.068316   10580 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key: {Name:mk4bcb88a5770958ea52d64f6df1b6838f8b5fc3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 01:57:24.097118   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 01:57:24.097649   10580 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 01:57:24.097791   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 01:57:24.098025   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 01:57:24.098025   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 01:57:24.098025   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 01:57:24.098812   10580 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 01:57:24.100115   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 01:57:24.135459   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 01:57:24.165011   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 01:57:24.192410   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 01:57:24.481059   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 01:57:25.003692   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 01:57:25.038428   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 01:57:25.065081   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 01:57:25.099226   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 01:57:25.144094   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 01:57:25.174094   10580 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 01:57:25.210940   10580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 01:57:25.237951   10580 ssh_runner.go:195] Run: openssl version
	I1217 01:57:25.254946   10580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.276935   10580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 01:57:25.294948   10580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.302943   10580 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.306934   10580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 01:57:25.370952   10580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 01:57:25.390944   10580 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/41682.pem /etc/ssl/certs/3ec20f2e.0
	I1217 01:57:25.415186   10580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.434956   10580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 01:57:25.453960   10580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.460961   10580 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.464957   10580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 01:57:25.515968   10580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 01:57:25.532957   10580 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	I1217 01:57:25.547952   10580 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.565954   10580 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 01:57:25.583961   10580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.591966   10580 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.596965   10580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 01:57:25.654221   10580 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 01:57:25.671221   10580 ssh_runner.go:195] Run: sudo ln -fs /etc/ssl/certs/4168.pem /etc/ssl/certs/51391683.0
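The openssl/ln sequence above wires the copied PEMs into OpenSSL's trust lookup: OpenSSL resolves CAs in /etc/ssl/certs through symlinks named <subject-hash>.0, so each installed certificate gets a link named after its `openssl x509 -hash` output. For one cert (path from the log):

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # e.g. b5213941, as seen above
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"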
	I1217 01:57:25.688222   10580 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 01:57:25.696236   10580 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1217 01:57:25.696236   10580 kubeadm.go:401] StartCluster: {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 01:57:25.699225   10580 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 01:57:25.732231   10580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 01:57:25.750219   10580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
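With the staged kubeadm.yaml.new promoted to kubeadm.yaml above, the config can also be sanity-checked before a real init using kubeadm's dry-run mode (a manual debugging step, not something this run performs):

    sudo env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run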
	I1217 01:57:25.764216   10580 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:57:25.768221   10580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:57:25.782223   10580 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:57:25.782223   10580 kubeadm.go:158] found existing configuration files:
	
	I1217 01:57:25.787226   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:57:25.811226   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:57:25.817308   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:57:25.846154   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:57:25.861155   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:57:25.865166   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:57:25.882164   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:57:25.894161   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:57:25.898177   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:57:25.916173   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:57:25.936694   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:57:25.940687   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
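
Before launching kubeadm, minikube checks whether any leftover kubeconfigs under /etc/kubernetes still point at the expected control-plane endpoint; grep exits 1 when the endpoint is absent and 2 when the file is missing (the case above), and either way the file is discarded with rm -f. A rough local sketch of that check-and-remove loop; the real commands run as root inside the node over ssh_runner, so this is only an approximation:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    // endpoint is the value minikube greps for before reusing a kubeconfig.
    const endpoint = "https://control-plane.minikube.internal:8443"

    func main() {
        confs := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range confs {
            // Non-zero grep status: the file is missing or points elsewhere,
            // so it cannot be reused for this cluster.
            if err := exec.Command("grep", "-q", endpoint, f).Run(); err != nil {
                fmt.Printf("%s not reusable (%v), removing\n", f, err)
                os.Remove(f) // error deliberately ignored, as with rm -f
            }
        }
    }
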
	I1217 01:57:25.956687   10580 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:57:26.100043   10580 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 01:57:26.198370   10580 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:57:26.302677   10580 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 01:58:51.115615    7596 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	I1217 01:58:51.115718    7596 kubeadm.go:319] 
	I1217 01:58:51.115916    7596 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 01:58:51.121578    7596 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 01:58:51.121578    7596 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 01:58:51.121578    7596 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 01:58:51.122136    7596 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 01:58:51.122290    7596 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 01:58:51.122857    7596 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_INET: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 01:58:51.122917    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 01:58:51.123472    7596 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 01:58:51.123993    7596 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 01:58:51.124096    7596 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 01:58:51.124691    7596 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] OS: Linux
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 01:58:51.124779    7596 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 01:58:51.125946    7596 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 01:58:51.126099    7596 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 01:58:51.126099    7596 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 01:58:51.128573    7596 out.go:252]   - Generating certificates and keys ...
	I1217 01:58:51.128573    7596 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 01:58:51.128573    7596 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 01:58:51.129197    7596 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 01:58:51.129388    7596 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 01:58:51.129558    7596 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 01:58:51.129682    7596 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 01:58:51.129773    7596 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 01:58:51.130781    7596 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 01:58:51.130943    7596 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 01:58:51.131040    7596 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 01:58:51.131231    7596 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 01:58:51.131356    7596 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 01:58:51.131482    7596 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 01:58:51.131482    7596 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 01:58:51.133818    7596 out.go:252]   - Booting up control plane ...
	I1217 01:58:51.133818    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 01:58:51.133818    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 01:58:51.134777    7596 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 01:58:51.135780    7596 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 01:58:51.135780    7596 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 01:58:51.135780    7596 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.002324195s
	I1217 01:58:51.135780    7596 kubeadm.go:319] 
	I1217 01:58:51.135780    7596 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 01:58:51.135780    7596 kubeadm.go:319] 	- The kubelet is not running
	I1217 01:58:51.135780    7596 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 01:58:51.135780    7596 kubeadm.go:319] 
	I1217 01:58:51.135780    7596 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 01:58:51.135780    7596 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 01:58:51.136777    7596 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 01:58:51.136777    7596 kubeadm.go:319] 
	W1217 01:58:51.136777    7596 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost no-preload-184000] and IPs [192.168.94.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.002324195s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": context deadline exceeded
	
	To see the stack trace of this error execute with --v=5 or higher
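
The fatal step above is kubeadm's wait-control-plane phase: it polls the kubelet's health endpoint at http://127.0.0.1:10248/healthz for up to 4m0s and aborts when no healthy answer arrives, which is what every attempt in this run hit. A self-contained sketch of that style of probe (a simplified illustration, not kubeadm's actual code; the 30-second deadline here is only for demonstration):

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // probeKubelet polls url until it returns 200 OK or the deadline passes,
    // mimicking kubeadm's [kubelet-check] loop.
    func probeKubelet(url string, deadline time.Duration) error {
        client := &http.Client{Timeout: 5 * time.Second}
        stop := time.Now().Add(deadline)
        for time.Now().Before(stop) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    return nil
                }
            }
            time.Sleep(2 * time.Second)
        }
        return fmt.Errorf("kubelet not healthy after %s", deadline)
    }

    func main() {
        if err := probeKubelet("http://127.0.0.1:10248/healthz", 30*time.Second); err != nil {
            fmt.Println(err)
        }
    }

A "context deadline exceeded" from this endpoint (the first failure above) means the kubelet never answered in time; the "connection refused" seen on the later attempts means nothing was listening at all.
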
	
	I1217 01:58:51.139887    7596 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 01:58:51.605403    7596 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:58:51.627327    7596 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 01:58:51.634266    7596 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 01:58:51.651778    7596 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 01:58:51.651778    7596 kubeadm.go:158] found existing configuration files:
	
	I1217 01:58:51.657261    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 01:58:51.670434    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 01:58:51.674365    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 01:58:51.692907    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 01:58:51.707259    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 01:58:51.711851    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 01:58:51.731617    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 01:58:51.746650    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 01:58:51.750583    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 01:58:51.769267    7596 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 01:58:51.784345    7596 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 01:58:51.789034    7596 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 01:58:51.805733    7596 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 01:58:51.926943    7596 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 01:58:52.006918    7596 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 01:58:52.107226    7596 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:01:27.963444   10580 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:01:27.963444   10580 kubeadm.go:319] 
	I1217 02:01:27.963616   10580 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:01:27.972023   10580 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 02:01:27.973054   10580 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:01:27.973281   10580 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:01:27.973281   10580 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 02:01:27.973281   10580 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 02:01:27.973281   10580 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 02:01:27.973281   10580 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 02:01:27.973879   10580 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_INET: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 02:01:27.973979   10580 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 02:01:27.974551   10580 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 02:01:27.975176   10580 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 02:01:27.975219   10580 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 02:01:27.975817   10580 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] OS: Linux
	I1217 02:01:27.975876   10580 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:01:27.975876   10580 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:01:27.976495   10580 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:01:27.976518   10580 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:01:27.977232   10580 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:01:27.977413   10580 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:01:27.977413   10580 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:01:27.977413   10580 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:01:27.979976   10580 out.go:252]   - Generating certificates and keys ...
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1217 02:01:27.980643   10580 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1217 02:01:27.981175   10580 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1217 02:01:27.981278   10580 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1217 02:01:27.981448   10580 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1217 02:01:27.982128   10580 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1217 02:01:27.982285   10580 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1217 02:01:27.982463   10580 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1217 02:01:27.982622   10580 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 02:01:27.982783   10580 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 02:01:27.983316   10580 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 02:01:27.983431   10580 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 02:01:27.985605   10580 out.go:252]   - Booting up control plane ...
	I1217 02:01:27.985605   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 02:01:27.985605   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 02:01:27.985605   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 02:01:27.986216   10580 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 02:01:27.986315   10580 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 02:01:27.986315   10580 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:01:27.987339   10580 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000575784s
	I1217 02:01:27.987339   10580 kubeadm.go:319] 
	I1217 02:01:27.987339   10580 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:01:27.987339   10580 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:01:27.987339   10580 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:01:27.987339   10580 kubeadm.go:319] 
	I1217 02:01:27.987913   10580 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:01:27.987913   10580 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:01:27.987913   10580 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:01:27.987913   10580 kubeadm.go:319] 
	W1217 02:01:27.987913   10580 out.go:285] ! initialization failed, will try again: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-383500] and IPs [192.168.76.2 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000575784s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	
	I1217 02:01:27.992425   10580 ssh_runner.go:195] Run: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force"
	I1217 02:01:28.454931   10580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 02:01:28.474574   10580 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1217 02:01:28.479997   10580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1217 02:01:28.494933   10580 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1217 02:01:28.494933   10580 kubeadm.go:158] found existing configuration files:
	
	I1217 02:01:28.501352   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1217 02:01:28.516227   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1217 02:01:28.521874   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1217 02:01:28.540752   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1217 02:01:28.554535   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1217 02:01:28.559019   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1217 02:01:28.577479   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1217 02:01:28.592775   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1217 02:01:28.596757   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1217 02:01:28.614687   10580 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1217 02:01:28.629343   10580 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1217 02:01:28.633759   10580 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1217 02:01:28.653776   10580 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1217 02:01:28.777097   10580 kubeadm.go:319] 	[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
	I1217 02:01:28.860083   10580 kubeadm.go:319] 	[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
	I1217 02:01:28.960806   10580 kubeadm.go:319] 	[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1217 02:02:52.901103    7596 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:02:52.901187    7596 kubeadm.go:319] 
	I1217 02:02:52.901405    7596 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:02:52.906962    7596 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 02:02:52.907051    7596 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:02:52.907051    7596 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:02:52.907051    7596 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 02:02:52.907051    7596 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 02:02:52.907664    7596 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 02:02:52.907698    7596 kubeadm.go:319] CONFIG_INET: enabled
	I1217 02:02:52.908322    7596 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 02:02:52.908447    7596 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 02:02:52.908571    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 02:02:52.908730    7596 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 02:02:52.908849    7596 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 02:02:52.909000    7596 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 02:02:52.909067    7596 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 02:02:52.909731    7596 kubeadm.go:319] OS: Linux
	I1217 02:02:52.909731    7596 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:02:52.910342    7596 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:02:52.910393    7596 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:02:52.911109    7596 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:02:52.911252    7596 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:02:52.911252    7596 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:02:52.911252    7596 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:02:52.914099    7596 out.go:252]   - Generating certificates and keys ...
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 02:02:52.914227    7596 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 02:02:52.914806    7596 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 02:02:52.915391    7596 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 02:02:52.915391    7596 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 02:02:52.915391    7596 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 02:02:52.915391    7596 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 02:02:52.915391    7596 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 02:02:52.915926    7596 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 02:02:52.916016    7596 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 02:02:52.916016    7596 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 02:02:52.916016    7596 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 02:02:52.916016    7596 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 02:02:52.918827    7596 out.go:252]   - Booting up control plane ...
	I1217 02:02:52.918827    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 02:02:52.919840    7596 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 02:02:52.920875    7596 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 02:02:52.920875    7596 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:02:52.920875    7596 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.000516808s
	I1217 02:02:52.920875    7596 kubeadm.go:319] 
	I1217 02:02:52.920875    7596 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:02:52.920875    7596 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:02:52.920875    7596 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:02:52.920875    7596 kubeadm.go:319] 
	I1217 02:02:52.920875    7596 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:02:52.920875    7596 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:02:52.921883    7596 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:02:52.921883    7596 kubeadm.go:319] 
	I1217 02:02:52.921883    7596 kubeadm.go:403] duration metric: took 8m4.1597601s to StartCluster
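
The duration metric covers the whole StartCluster call, both failed kubeadm attempts plus the reset in between, which is why it lands near the test's 8-minute mark. The usual Go pattern behind a line like this is time.Since over a recorded start time; a minimal sketch with a hypothetical stand-in function:

    package main

    import (
        "log"
        "time"
    )

    // startCluster is a stand-in for the real work, not minikube's function.
    func startCluster() {
        time.Sleep(10 * time.Millisecond)
    }

    func main() {
        start := time.Now()
        startCluster()
        // Produces a line shaped like the "duration metric" entry above.
        log.Printf("duration metric: took %s to StartCluster", time.Since(start))
    }
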
	I1217 02:02:52.921883    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:02:52.925883    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:02:52.985042    7596 cri.go:89] found id: ""
	I1217 02:02:52.985042    7596 logs.go:282] 0 containers: []
	W1217 02:02:52.985042    7596 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:02:52.985042    7596 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:02:52.989497    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:02:53.035444    7596 cri.go:89] found id: ""
	I1217 02:02:53.035444    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.035444    7596 logs.go:284] No container was found matching "etcd"
	I1217 02:02:53.035444    7596 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:02:53.040633    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:02:53.090166    7596 cri.go:89] found id: ""
	I1217 02:02:53.090166    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.090166    7596 logs.go:284] No container was found matching "coredns"
	I1217 02:02:53.090166    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:02:53.095276    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:02:53.155229    7596 cri.go:89] found id: ""
	I1217 02:02:53.155292    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.155292    7596 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:02:53.155292    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:02:53.159579    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:02:53.201389    7596 cri.go:89] found id: ""
	I1217 02:02:53.201389    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.201389    7596 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:02:53.201389    7596 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:02:53.206627    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:02:53.251727    7596 cri.go:89] found id: ""
	I1217 02:02:53.251807    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.251807    7596 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:02:53.251807    7596 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:02:53.255868    7596 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:02:53.296927    7596 cri.go:89] found id: ""
	I1217 02:02:53.297002    7596 logs.go:282] 0 containers: []
	W1217 02:02:53.297002    7596 logs.go:284] No container was found matching "kindnet"
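
With the init given up, minikube enumerates the expected control-plane containers one component at a time via crictl ps -a --quiet --name=<component>; every query above returns an empty ID list, confirming no control-plane container was ever created. A rough sketch of that enumeration (assumes crictl and sudo are available on the host; not minikube's actual cri.go code):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // listCRIContainers returns the IDs crictl reports for a name filter,
    // one ID per output line.
    func listCRIContainers(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, name := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler"} {
            ids, err := listCRIContainers(name)
            if err != nil {
                fmt.Printf("crictl failed for %s: %v\n", name, err)
                continue
            }
            // An empty slice reproduces the "0 containers: []" lines above.
            fmt.Printf("%s: %d containers: %v\n", name, len(ids), ids)
        }
    }
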
	I1217 02:02:53.297002    7596 logs.go:123] Gathering logs for kubelet ...
	I1217 02:02:53.297002    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:02:53.362489    7596 logs.go:123] Gathering logs for dmesg ...
	I1217 02:02:53.362489    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:02:53.402379    7596 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:02:53.402379    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:02:53.486459    7596 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:02:53.475461   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.476269   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.480737   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.482819   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:02:53.484040   10808 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	[stderr identical to the five "connection refused" lines quoted directly above]
	
	** /stderr **
	I1217 02:02:53.486459    7596 logs.go:123] Gathering logs for Docker ...
	I1217 02:02:53.486459    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:02:53.519898    7596 logs.go:123] Gathering logs for container status ...
	I1217 02:02:53.519898    7596 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:02:53.571631    7596 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.000516808s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
	W1217 02:02:53.571705    7596 out.go:285] * 
	W1217 02:02:53.571763    7596 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output quoted in full above]
	W1217 02:02:53.571763    7596 out.go:285] * 
	W1217 02:02:53.573684    7596 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:02:53.577599    7596 out.go:203] 
	W1217 02:02:53.580937    7596 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	[stdout/stderr identical to the kubeadm init output quoted in full above]
	W1217 02:02:53.580937    7596 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:02:53.580937    7596 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I1217 02:02:53.584112    7596 out.go:203] 
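	Applied to this profile, minikube's own suggestion above would read as follows (a sketch only; the flag text is quoted verbatim from the suggestion, and this run never executed it, so it is unverified against the cgroup v1 validation failure shown in the kubelet journal below):

	out/minikube-windows-amd64.exe start -p no-preload-184000 --extra-config=kubelet.cgroup-driver=systemd

	Note that the kubelet warning in the stderr above names a different knob, the kubelet configuration option 'FailCgroupV1', so the cgroup-driver flag alone may not address this particular failure.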
	
	
	==> Docker <==
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638787318Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638875828Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638886629Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638892529Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638897830Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638925533Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.638969938Z" level=info msg="Initializing buildkit"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.814190912Z" level=info msg="Completed buildkit initialization"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.834145684Z" level=info msg="Daemon has completed initialization"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.834353706Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.834360607Z" level=info msg="API listen on [::]:2376"
	Dec 17 01:54:11 no-preload-184000 dockerd[1168]: time="2025-12-17T01:54:11.834438816Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 01:54:11 no-preload-184000 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 01:54:12 no-preload-184000 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Loaded network plugin cni"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 01:54:12 no-preload-184000 cri-dockerd[1458]: time="2025-12-17T01:54:12Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 01:54:12 no-preload-184000 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
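	The cri-dockerd journal above reports "Setting cgroupDriver cgroupfs". A quick cross-check of the driver and cgroup version the daemon actually runs with (standard docker CLI format fields; a diagnostic sketch, not executed as part of this run):

	docker info --format '{{.CgroupDriver}} cgroup v{{.CgroupVersion}}'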
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:04:59.044451   13729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:04:59.045240   13729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:04:59.048461   13729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:04:59.049705   13729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:04:59.051484   13729 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
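	The "connection refused" lines above only confirm that nothing answers on localhost:8443; with the kubelet crash-looping (see the kubelet journal below), the apiserver static pod never starts. One way to verify from inside the node, assuming ss from iproute2 is present in the kicbase image (a diagnostic sketch, not part of this run):

	out/minikube-windows-amd64.exe ssh -p no-preload-184000 "sudo ss -tlnp | grep 8443"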
	
	
	==> dmesg <==
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +6.736198] tmpfs: Unknown parameter 'noswap'
	[  +0.306826] CPU: 13 PID: 440898 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000005] RIP: 0033:0x7f86f2041b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f86f2041af6.
	[  +0.000001] RSP: 002b:00007ffdf29d7630 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +1.037447] CPU: 4 PID: 441085 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fed1ac73b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7fed1ac73af6.
	[  +0.000001] RSP: 002b:00007fff679e5600 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[ +20.473571] tmpfs: Unknown parameter 'noswap'
	
	
	==> kernel <==
	 02:04:59 up  2:24,  0 user,  load average: 0.43, 1.81, 3.15
	Linux no-preload-184000 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:04:56 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:04:56 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 17 02:04:56 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:04:56 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:04:56 no-preload-184000 kubelet[13564]: E1217 02:04:56.850250   13564 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:04:56 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:04:56 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:04:57 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 486.
	Dec 17 02:04:57 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:04:57 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:04:57 no-preload-184000 kubelet[13588]: E1217 02:04:57.632027   13588 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:04:57 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:04:57 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:04:58 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 17 02:04:58 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:04:58 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:04:58 no-preload-184000 kubelet[13616]: E1217 02:04:58.372066   13616 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:04:58 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:04:58 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:04:58 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 488.
	Dec 17 02:04:58 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:04:59 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:04:59 no-preload-184000 kubelet[13737]: E1217 02:04:59.109543   13737 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:04:59 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:04:59 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

                                                
                                                
-- /stdout --
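Every kubelet restart in the journal above dies on the same validation: "kubelet is configured to not run on a host using cgroup v1". A standard one-liner to confirm which cgroup version the WSL2 host actually exposes to the node (prints cgroup2fs on a v2 host and tmpfs on a v1 host; a diagnostic sketch, not part of this run):

	out/minikube-windows-amd64.exe ssh -p no-preload-184000 "stat -fc %T /sys/fs/cgroup/"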
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000: exit status 6 (598.2382ms)

                                                
                                                
-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E1217 02:05:00.103906     768 status.go:458] kubeconfig endpoint: get endpoint: "no-preload-184000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "no-preload-184000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (117.87s)
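The status output above also suggests repairing the stale kubectl context; for this profile the corresponding command would be (a sketch, not executed in this run):

	out/minikube-windows-amd64.exe update-context -p no-preload-184000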

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (378.19s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p no-preload-184000 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0
E1217 02:05:06.459702    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:05:13.034332    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-044000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:05:22.397728    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p no-preload-184000 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 80 (6m14.3504535s)

                                                
                                                
-- stdout --
	* [no-preload-184000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "no-preload-184000" primary control-plane node in "no-preload-184000" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	  - Using image registry.k8s.io/echoserver:1.4
	* Enabled addons: 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1217 02:05:02.629645    6768 out.go:360] Setting OutFile to fd 852 ...
	I1217 02:05:02.671051    6768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:05:02.671051    6768 out.go:374] Setting ErrFile to fd 1172...
	I1217 02:05:02.671051    6768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:05:02.687471    6768 out.go:368] Setting JSON to false
	I1217 02:05:02.690746    6768 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8691,"bootTime":1765928411,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 02:05:02.690781    6768 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 02:05:02.694017    6768 out.go:179] * [no-preload-184000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 02:05:02.699245    6768 notify.go:221] Checking for updates...
	I1217 02:05:02.701769    6768 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:05:02.703938    6768 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:05:02.706929    6768 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 02:05:02.709501    6768 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:05:02.712185    6768 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 02:05:02.715207    6768 config.go:182] Loaded profile config "no-preload-184000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:02.716501    6768 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:05:02.837461    6768 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 02:05:02.842258    6768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:05:03.079348    6768 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:05:03.054281062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:05:03.087094    6768 out.go:179] * Using the docker driver based on existing profile
	I1217 02:05:03.091220    6768 start.go:309] selected driver: docker
	I1217 02:05:03.091220    6768 start.go:927] validating driver "docker" against &{Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:03.091220    6768 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:05:03.188409    6768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:05:03.434313    6768 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:05:03.415494177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:05:03.434313    6768 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 02:05:03.434313    6768 cni.go:84] Creating CNI manager for ""
	I1217 02:05:03.434313    6768 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:05:03.434313    6768 start.go:353] cluster config:
	{Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:03.439310    6768 out.go:179] * Starting "no-preload-184000" primary control-plane node in "no-preload-184000" cluster
	I1217 02:05:03.441310    6768 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 02:05:03.443310    6768 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:05:03.448311    6768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:05:03.448311    6768 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:05:03.448311    6768 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\config.json ...
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1217 02:05:03.545905    6768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:05:03.545905    6768 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:05:03.545905    6768 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:05:03.545905    6768 start.go:360] acquireMachinesLock for no-preload-184000: {Name:mk58fd592c3ebf84a2801325b861ffe90e12015f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:03.545905    6768 start.go:364] duration metric: took 0s to acquireMachinesLock for "no-preload-184000"
	I1217 02:05:03.546921    6768 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:05:03.546921    6768 fix.go:54] fixHost starting: 
	I1217 02:05:03.557903    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:03.760117    6768 fix.go:112] recreateIfNeeded on no-preload-184000: state=Stopped err=<nil>
	W1217 02:05:03.760117    6768 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 02:05:03.764113    6768 out.go:252] * Restarting existing docker container for "no-preload-184000" ...
	I1217 02:05:03.767110    6768 cli_runner.go:164] Run: docker start no-preload-184000
	I1217 02:05:05.253549    6768 cli_runner.go:217] Completed: docker start no-preload-184000: (1.4864164s)
	I1217 02:05:05.260543    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:05.357919    6768 kic.go:430] container "no-preload-184000" state is running.
	I1217 02:05:05.364922    6768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-184000
	I1217 02:05:05.444478    6768 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\config.json ...
	I1217 02:05:05.447474    6768 machine.go:94] provisionDockerMachine start ...
	I1217 02:05:05.453480    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:05.545241    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:05.545241    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:05.545241    6768 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:05:05.549583    6768 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
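
[Editor's note] The handshake EOF above is transient: the container was restarted only a moment earlier, so sshd inside it is not yet accepting connections, and the same SSH command succeeds a few seconds later (02:05:08 below). A minimal sketch of the retry-with-backoff behavior this implies; the helper name and structure are illustrative assumptions, not minikube's actual API.

// Package sshretry is an illustrative sketch, not minikube code.
package sshretry

import (
	"fmt"
	"time"

	"golang.org/x/crypto/ssh"
)

// dialWithRetry retries an SSH dial with exponential backoff until the
// handshake succeeds or the attempt budget is exhausted.
func dialWithRetry(addr string, cfg *ssh.ClientConfig, attempts int) (*ssh.Client, error) {
	var lastErr error
	for i := 0; i < attempts; i++ {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			return client, nil
		}
		lastErr = err                      // e.g. "ssh: handshake failed: EOF" while sshd starts
		time.Sleep(time.Second << uint(i)) // backoff: 1s, 2s, 4s, ...
	}
	return nil, fmt.Errorf("ssh dial %s: %w", addr, lastErr)
}
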
	I1217 02:05:06.370661    6768 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.370661    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1217 02:05:06.371228    6768 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 2.9228733s
	I1217 02:05:06.371228    6768 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1217 02:05:06.375872    6768 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.375872    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1217 02:05:06.376401    6768 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 2.9275166s
	I1217 02:05:06.376463    6768 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1217 02:05:06.376989    6768 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.377073    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1217 02:05:06.377073    6768 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.9287184s
	I1217 02:05:06.377073    6768 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1217 02:05:06.397758    6768 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.397758    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1217 02:05:06.397758    6768 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.9494026s
	I1217 02:05:06.397758    6768 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1217 02:05:06.401745    6768 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.401745    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1217 02:05:06.401745    6768 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 2.9533893s
	I1217 02:05:06.401745    6768 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1217 02:05:06.434118    6768 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.434118    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1217 02:05:06.434118    6768 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 2.9857618s
	I1217 02:05:06.436060    6768 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1217 02:05:06.469702    6768 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.470703    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1217 02:05:06.470703    6768 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.022347s
	I1217 02:05:06.470703    6768 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1217 02:05:06.521227    6768 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.521321    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1217 02:05:06.521321    6768 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 3.0729641s
	I1217 02:05:06.521321    6768 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1217 02:05:06.521321    6768 cache.go:87] Successfully saved all images to host disk.
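
[Editor's note] The cache block above repeats one pattern per image: acquire a per-image lock, stat the sanitized tarball path, and skip the save when it already exists. (The earlier "windows sanitize" lines rewrite ':' to '_' because ':' is not legal in Windows file names.) An illustrative sketch of that pattern, with names chosen for clarity rather than taken from minikube:

// Package imagecache is an illustrative sketch, not minikube code.
package imagecache

import (
	"os"
	"path/filepath"
	"strings"
	"sync"
)

var locks sync.Map // image name -> *sync.Mutex; stands in for the named locks in the log

// sanitize mirrors the "windows sanitize" lines above: the tag separator
// ':' becomes '_' so the path is valid on Windows.
func sanitize(image string) string {
	return strings.ReplaceAll(image, ":", "_")
}

// SaveIfMissing returns true when the image tarball was already cached,
// and otherwise invokes the supplied save function.
func SaveIfMissing(cacheDir, image string, save func(dst string) error) (bool, error) {
	muIface, _ := locks.LoadOrStore(image, &sync.Mutex{})
	mu := muIface.(*sync.Mutex)
	mu.Lock()
	defer mu.Unlock()

	dst := filepath.Join(cacheDir, sanitize(image))
	if _, err := os.Stat(dst); err == nil {
		return true, nil // tarball "exists", skip the save
	}
	return false, save(dst)
}
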
	I1217 02:05:08.728111    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-184000
	
	I1217 02:05:08.728111    6768 ubuntu.go:182] provisioning hostname "no-preload-184000"
	I1217 02:05:08.732574    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:08.788471    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:08.788517    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:08.788517    6768 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-184000 && echo "no-preload-184000" | sudo tee /etc/hostname
	I1217 02:05:08.984320    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-184000
	
	I1217 02:05:08.988540    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:09.045241    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:09.046042    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:09.046073    6768 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-184000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-184000/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-184000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:05:09.239223    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 02:05:09.239223    6768 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 02:05:09.239223    6768 ubuntu.go:190] setting up certificates
	I1217 02:05:09.239223    6768 provision.go:84] configureAuth start
	I1217 02:05:09.242936    6768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-184000
	I1217 02:05:09.300521    6768 provision.go:143] copyHostCerts
	I1217 02:05:09.300924    6768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 02:05:09.300924    6768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 02:05:09.301449    6768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 02:05:09.301878    6768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 02:05:09.301878    6768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 02:05:09.302546    6768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 02:05:09.303134    6768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 02:05:09.303134    6768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 02:05:09.303134    6768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 02:05:09.303843    6768 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-184000 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-184000]
	I1217 02:05:09.513127    6768 provision.go:177] copyRemoteCerts
	I1217 02:05:09.517075    6768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:05:09.519665    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:09.573516    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:09.696089    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 02:05:09.723663    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:05:09.749598    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 02:05:09.779713    6768 provision.go:87] duration metric: took 540.4619ms to configureAuth
	I1217 02:05:09.779730    6768 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:05:09.779917    6768 config.go:182] Loaded profile config "no-preload-184000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:09.784013    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:09.841680    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:09.841680    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:09.841680    6768 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 02:05:10.010881    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 02:05:10.010926    6768 ubuntu.go:71] root file system type: overlay
	I1217 02:05:10.011054    6768 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 02:05:10.014899    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:10.071419    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:10.071649    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:10.071649    6768 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 02:05:10.253657    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 02:05:10.257912    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:10.314224    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:10.314288    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:10.314288    6768 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
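
[Editor's note] The one-liner above is an idempotent unit update: diff -u exits 0 when the rendered unit matches the installed one, so the mv/daemon-reload/restart branch after || only runs when the file actually changed. A hedged sketch that merely rebuilds the same shell pipeline for readability; this is not minikube's actual helper.

// Package systemdunit is an illustrative sketch, not minikube code.
package systemdunit

import "fmt"

// updateUnitCmd returns the replace-only-if-changed command for a unit
// file, given the convention that the freshly rendered copy sits at
// "<unit>.new".
func updateUnitCmd(unit string) string {
	newFile := unit + ".new"
	return fmt.Sprintf(
		// diff exits 0 when the files match, so the branch after ||
		// (move into place, reload, enable, restart) only runs when
		// the rendered unit actually changed.
		"sudo diff -u %[1]s %[2]s || { sudo mv %[2]s %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && "+
			"sudo systemctl -f restart docker; }",
		unit, newFile)
}
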
	I1217 02:05:10.496294    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 02:05:10.496294    6768 machine.go:97] duration metric: took 5.0487445s to provisionDockerMachine
	I1217 02:05:10.496294    6768 start.go:293] postStartSetup for "no-preload-184000" (driver="docker")
	I1217 02:05:10.496294    6768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:05:10.501160    6768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:05:10.504159    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:10.558430    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:10.698125    6768 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:05:10.706351    6768 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:05:10.706403    6768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:05:10.706403    6768 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 02:05:10.706403    6768 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 02:05:10.707067    6768 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 02:05:10.711519    6768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:05:10.725151    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 02:05:10.754903    6768 start.go:296] duration metric: took 258.6046ms for postStartSetup
	I1217 02:05:10.759061    6768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:05:10.762269    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:10.816597    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:10.943522    6768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:05:10.958658    6768 fix.go:56] duration metric: took 7.411626s for fixHost
	I1217 02:05:10.958658    6768 start.go:83] releasing machines lock for "no-preload-184000", held for 7.4126419s
	I1217 02:05:10.962906    6768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-184000
	I1217 02:05:11.017406    6768 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 02:05:11.021445    6768 ssh_runner.go:195] Run: cat /version.json
	I1217 02:05:11.021510    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:11.024650    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:11.076963    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:11.082042    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	W1217 02:05:11.198310    6768 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 02:05:11.210947    6768 ssh_runner.go:195] Run: systemctl --version
	I1217 02:05:11.226813    6768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:05:11.235667    6768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:05:11.242573    6768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:05:11.255007    6768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:05:11.255007    6768 start.go:496] detecting cgroup driver to use...
	I1217 02:05:11.255007    6768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:05:11.256009    6768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:05:11.283766    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:05:11.303122    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:05:11.317795    6768 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:05:11.321726    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:05:11.340924    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W1217 02:05:11.357913    6768 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 02:05:11.357979    6768 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 02:05:11.359375    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 02:05:11.377107    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:05:11.395476    6768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:05:11.418432    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:05:11.437643    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:05:11.458621    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:05:11.477313    6768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:05:11.495090    6768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:05:11.513809    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:11.664976    6768 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 02:05:11.829322    6768 start.go:496] detecting cgroup driver to use...
	I1217 02:05:11.829433    6768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:05:11.835895    6768 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 02:05:11.860815    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:05:11.883615    6768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 02:05:11.960567    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:05:11.983346    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:05:12.002889    6768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:05:12.032515    6768 ssh_runner.go:195] Run: which cri-dockerd
	I1217 02:05:12.044249    6768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 02:05:12.056817    6768 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 02:05:12.080834    6768 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 02:05:12.249437    6768 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 02:05:12.397968    6768 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 02:05:12.397968    6768 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 02:05:12.425594    6768 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 02:05:12.447409    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:12.604225    6768 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 02:05:13.440560    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:05:13.466105    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 02:05:13.489994    6768 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 02:05:13.514704    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:05:13.536605    6768 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 02:05:13.693215    6768 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 02:05:13.846670    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:14.004258    6768 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 02:05:14.030193    6768 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 02:05:14.055627    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:14.209153    6768 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 02:05:14.322039    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:05:14.339530    6768 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 02:05:14.345129    6768 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 02:05:14.353653    6768 start.go:564] Will wait 60s for crictl version
	I1217 02:05:14.357665    6768 ssh_runner.go:195] Run: which crictl
	I1217 02:05:14.368483    6768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:05:14.413189    6768 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 02:05:14.417273    6768 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:05:14.462617    6768 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:05:14.502904    6768 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 02:05:14.506033    6768 cli_runner.go:164] Run: docker exec -t no-preload-184000 dig +short host.docker.internal
	I1217 02:05:14.646991    6768 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 02:05:14.651689    6768 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 02:05:14.659909    6768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:05:14.680414    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:14.733079    6768 kubeadm.go:884] updating cluster {Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:05:14.734079    6768 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:05:14.737079    6768 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 02:05:14.767963    6768 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 02:05:14.767963    6768 cache_images.go:86] Images are preloaded, skipping loading
	I1217 02:05:14.767963    6768 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 docker true true} ...
	I1217 02:05:14.768480    6768 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-184000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 02:05:14.771542    6768 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 02:05:14.846616    6768 cni.go:84] Creating CNI manager for ""
	I1217 02:05:14.846636    6768 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:05:14.846636    6768 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 02:05:14.846636    6768 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-184000 NodeName:no-preload-184000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:05:14.846636    6768 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-184000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
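
[Editor's note] The kubeadm config above is a four-document YAML stream (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) joined by --- separators. A minimal sketch of how such a stream could be rendered from templates; the package, struct, and field names below are assumptions for illustration, not minikube's.

// Package kubeadmcfg is an illustrative sketch, not minikube code.
package kubeadmcfg

import (
	"bytes"
	"text/template"
)

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
`

// Params carries the values substituted into the first document.
type Params struct {
	NodeIP        string
	APIServerPort int
}

// Render executes the InitConfiguration template and appends any further
// documents, separated by the standard YAML stream separator.
func Render(p Params, extraDocs ...string) (string, error) {
	var buf bytes.Buffer
	t := template.Must(template.New("init").Parse(initTmpl))
	if err := t.Execute(&buf, p); err != nil {
		return "", err
	}
	for _, d := range extraDocs {
		buf.WriteString("---\n")
		buf.WriteString(d)
	}
	return buf.String(), nil
}
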
	
	I1217 02:05:14.851632    6768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 02:05:14.863585    6768 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:05:14.868130    6768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:05:14.879683    6768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 02:05:14.899726    6768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:05:14.919991    6768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1217 02:05:14.944949    6768 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:05:14.952431    6768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:05:14.972008    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:15.116248    6768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:05:15.140002    6768 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000 for IP: 192.168.94.2
	I1217 02:05:15.140002    6768 certs.go:195] generating shared ca certs ...
	I1217 02:05:15.140002    6768 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:15.140318    6768 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 02:05:15.140318    6768 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 02:05:15.140951    6768 certs.go:257] generating profile certs ...
	I1217 02:05:15.141475    6768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\client.key
	I1217 02:05:15.141776    6768 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.key.d162c569
	I1217 02:05:15.141823    6768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\proxy-client.key
	I1217 02:05:15.142712    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 02:05:15.142929    6768 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 02:05:15.142993    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 02:05:15.143196    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 02:05:15.143459    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 02:05:15.143743    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 02:05:15.144134    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 02:05:15.145445    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:05:15.174639    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 02:05:15.206543    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:05:15.237390    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 02:05:15.269725    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:05:15.299081    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:05:15.331970    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:05:15.364258    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 02:05:15.394880    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 02:05:15.424665    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:05:15.454305    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 02:05:15.482694    6768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:05:15.505956    6768 ssh_runner.go:195] Run: openssl version
	I1217 02:05:15.520857    6768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 02:05:15.538884    6768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 02:05:15.556769    6768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 02:05:15.565231    6768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 02:05:15.569694    6768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 02:05:15.618090    6768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:05:15.636651    6768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:15.657687    6768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:05:15.678656    6768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:15.686438    6768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:15.690381    6768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:15.738620    6768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:05:15.756906    6768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 02:05:15.776662    6768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 02:05:15.794117    6768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 02:05:15.801453    6768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 02:05:15.805697    6768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 02:05:15.853109    6768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
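
[Editor's note] The repeated openssl x509 -hash / ln -fs / test -L sequences above follow OpenSSL's c_rehash convention: each trusted certificate is symlinked at /etc/ssl/certs/<subject-hash>.0 so the library can locate it by hashed lookup. A small illustrative helper that wires a certificate into that layout, shelling out to openssl the same way the log does; this is hypothetical, not minikube's code.

// Package catrust is an illustrative sketch, not minikube code.
package catrust

import (
	"fmt"
	"os/exec"
	"strings"
)

// trustCert computes the OpenSSL subject hash of a certificate and links
// it into /etc/ssl/certs under the "<hash>.0" naming convention.
func trustCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hash %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return exec.Command("sudo", "ln", "-fs", certPath, link).Run()
}
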
	I1217 02:05:15.871938    6768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:05:15.885136    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:05:15.931869    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:05:15.978751    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:05:16.028376    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:05:16.079257    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:05:16.133289    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 02:05:16.177187    6768 kubeadm.go:401] StartCluster: {Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:16.181577    6768 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 02:05:16.216215    6768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:05:16.228229    6768 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:05:16.228229    6768 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:05:16.233407    6768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:05:16.246099    6768 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:05:16.251775    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:16.304124    6768 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-184000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:05:16.305294    6768 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-184000" cluster setting kubeconfig missing "no-preload-184000" context setting]
	I1217 02:05:16.305850    6768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:16.326797    6768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:05:16.342507    6768 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1217 02:05:16.342507    6768 kubeadm.go:602] duration metric: took 114.2766ms to restartPrimaryControlPlane
	I1217 02:05:16.342507    6768 kubeadm.go:403] duration metric: took 165.3768ms to StartCluster
	I1217 02:05:16.342507    6768 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:16.342507    6768 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:05:16.343620    6768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:16.344231    6768 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 02:05:16.344231    6768 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:05:16.344231    6768 addons.go:70] Setting storage-provisioner=true in profile "no-preload-184000"
	I1217 02:05:16.344231    6768 addons.go:239] Setting addon storage-provisioner=true in "no-preload-184000"
	I1217 02:05:16.344231    6768 addons.go:70] Setting dashboard=true in profile "no-preload-184000"
	I1217 02:05:16.344231    6768 config.go:182] Loaded profile config "no-preload-184000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:16.344231    6768 host.go:66] Checking if "no-preload-184000" exists ...
	I1217 02:05:16.344231    6768 addons.go:70] Setting default-storageclass=true in profile "no-preload-184000"
	I1217 02:05:16.344231    6768 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-184000"
	I1217 02:05:16.344231    6768 addons.go:239] Setting addon dashboard=true in "no-preload-184000"
	W1217 02:05:16.344929    6768 addons.go:248] addon dashboard should already be in state true
	I1217 02:05:16.344929    6768 host.go:66] Checking if "no-preload-184000" exists ...
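The addons.go lines above record three addons (storage-provisioner, default-storageclass, dashboard) being turned on for the profile from a single toEnable map, each proceeding independently — note the interleaved container inspects that follow. A rough Go sketch of that fan-out, with hypothetical names and a no-op enable step standing in for the real scp/apply work:

package main

import (
	"fmt"
	"sync"
)

// enableAddon is a hypothetical placeholder; the real work above is copying
// manifests into /etc/kubernetes/addons and running kubectl apply.
func enableAddon(profile, name string) error {
	fmt.Printf("Setting addon %s=true in %q\n", name, profile)
	return nil
}

func main() {
	toEnable := map[string]bool{
		"storage-provisioner":  true,
		"default-storageclass": true,
		"dashboard":            true,
		"metrics-server":       false,
	}
	var wg sync.WaitGroup
	for name, on := range toEnable {
		if !on {
			continue
		}
		wg.Add(1)
		go func(n string) {
			defer wg.Done()
			if err := enableAddon("no-preload-184000", n); err != nil {
				fmt.Println("addon failed:", n, err)
			}
		}(name)
	}
	wg.Wait()
}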
	I1217 02:05:16.347844    6768 out.go:179] * Verifying Kubernetes components...
	I1217 02:05:16.354044    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:16.354121    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:16.355814    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:16.357052    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:16.409696    6768 addons.go:239] Setting addon default-storageclass=true in "no-preload-184000"
	I1217 02:05:16.409696    6768 host.go:66] Checking if "no-preload-184000" exists ...
	I1217 02:05:16.410688    6768 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:05:16.412689    6768 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:16.412689    6768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:05:16.416693    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:16.417698    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:16.423696    6768 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:05:16.425691    6768 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 02:05:16.428703    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:05:16.428703    6768 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:05:16.431694    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:16.467691    6768 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:16.468689    6768 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:05:16.469695    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:16.471696    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:16.482691    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:16.518691    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:16.521691    6768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:05:16.604232    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:16.609620    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:05:16.609620    6768 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:05:16.632701    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:05:16.632701    6768 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:05:16.648900    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:16.655841    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:05:16.655841    6768 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:05:16.700825    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:05:16.700825    6768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 02:05:16.727124    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:05:16.728137    6768 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 02:05:16.747122    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:05:16.747167    6768 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:05:16.768592    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:05:16.768592    6768 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W1217 02:05:16.800138    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:16.800273    6768 retry.go:31] will retry after 331.277361ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:16.806289    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-184000
	W1217 02:05:16.807169    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:16.807169    6768 retry.go:31] will retry after 367.14462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:16.821991    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:05:16.821991    6768 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:05:16.842976    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:16.842976    6768 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:05:16.864982    6768 node_ready.go:35] waiting up to 6m0s for node "no-preload-184000" to be "Ready" ...
	I1217 02:05:16.867979    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:16.963061    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:16.963061    6768 retry.go:31] will retry after 179.721934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.138499    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:17.147072    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:17.178163    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:17.232301    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.232367    6768 retry.go:31] will retry after 261.645604ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:17.232463    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.232532    6768 retry.go:31] will retry after 358.922489ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:17.264584    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.264642    6768 retry.go:31] will retry after 293.195494ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.499020    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:17.564644    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:17.598253    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:17.609802    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.609802    6768 retry.go:31] will retry after 356.11648ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:17.728986    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.728986    6768 retry.go:31] will retry after 414.908289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:17.728986    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.728986    6768 retry.go:31] will retry after 471.765196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.972892    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:18.048428    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.048428    6768 retry.go:31] will retry after 848.614748ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.149277    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:18.205928    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:18.270282    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.270282    6768 retry.go:31] will retry after 717.444443ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:18.309651    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.309651    6768 retry.go:31] will retry after 981.836066ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.901981    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:18.981321    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.981857    6768 retry.go:31] will retry after 1.188790069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.992863    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:19.074677    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:19.074677    6768 retry.go:31] will retry after 947.510236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:19.297489    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:19.377867    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:19.377937    6768 retry.go:31] will retry after 1.104512362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard validation errors as above]
	I1217 02:05:20.028161    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:20.102126    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:20.102126    6768 retry.go:31] will retry after 2.018338834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: the same storageclass validation error as above]
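
Every failure in this log has the same root cause: before submitting anything, kubectl's client-side validation downloads the OpenAPI schema from the apiserver ("failed to download openapi" above), and with nothing listening on localhost:8443 the schema fetch dies with "connection refused". The suggested --validate=false would only skip the schema download; the apply itself would still need a reachable server. A hypothetical standalone probe in Go that reproduces the failure mode without involving kubectl:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// Dial the endpoint kubectl's validator tries to reach. "connection
	// refused" means nothing is listening there, i.e. the apiserver is down.
	func main() {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("apiserver unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is open")
	}
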
	I1217 02:05:20.175978    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:20.253210    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:20.253210    6768 retry.go:31] will retry after 2.536835686s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: the same storage-provisioner validation error as above]
	I1217 02:05:20.487984    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:20.611020    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard validation errors as above]
	I1217 02:05:20.611556    6768 retry.go:31] will retry after 1.621989786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard validation errors as above]
	I1217 02:05:22.126652    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:22.202802    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: the same storageclass validation error as above]
	I1217 02:05:22.202802    6768 retry.go:31] will retry after 2.213473046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: the same storageclass validation error as above]
	I1217 02:05:22.239657    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:22.319492    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard validation errors as above]
	I1217 02:05:22.319565    6768 retry.go:31] will retry after 2.644500815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard validation errors as above]
	I1217 02:05:22.794504    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:22.901867    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: the same storage-provisioner validation error as above]
	I1217 02:05:22.901867    6768 retry.go:31] will retry after 2.159892203s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: the same storage-provisioner validation error as above]
	I1217 02:05:24.422186    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:24.505078    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: the same storageclass validation error as above]
	I1217 02:05:24.505078    6768 retry.go:31] will retry after 5.38992916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: the same storageclass validation error as above]
	I1217 02:05:24.969459    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:25.066905    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:25.098830    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard validation errors as above]
	I1217 02:05:25.098830    6768 retry.go:31] will retry after 2.819506289s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard validation errors as above]
	W1217 02:05:25.172740    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: the same storage-provisioner validation error as above]
	I1217 02:05:25.172777    6768 retry.go:31] will retry after 5.817482434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: the same storage-provisioner validation error as above]
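
Note the interleaving at 02:05:24.969 and 02:05:25.066 above: the dashboard and storage-provisioner applies are issued back-to-back before either returns, so each addon is evidently driven by its own worker and their retry clocks run independently. A minimal sketch of that fan-out, assuming one goroutine per addon (the helper name and manifest map are illustrative):

	package main

	import (
		"fmt"
		"os/exec"
		"sync"
	)

	// applyAddons runs one kubectl apply per addon in its own goroutine,
	// which is why the log interleaves dashboard and storage-provisioner
	// attempts with independent backoff timers.
	func applyAddons(manifests map[string][]string) {
		var wg sync.WaitGroup
		for name, files := range manifests {
			wg.Add(1)
			go func(name string, files []string) {
				defer wg.Done()
				args := []string{"apply", "--force"}
				for _, f := range files {
					args = append(args, "-f", f)
				}
				if err := exec.Command("kubectl", args...).Run(); err != nil {
					fmt.Printf("%s: apply failed: %v\n", name, err)
				}
			}(name, files)
		}
		wg.Wait()
	}

	func main() {
		applyAddons(map[string][]string{
			"storageclass":        {"/etc/kubernetes/addons/storageclass.yaml"},
			"storage-provisioner": {"/etc/kubernetes/addons/storage-provisioner.yaml"},
		})
	}
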
	W1217 02:05:26.902270    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
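
The node_ready.go:55 line is a separate poller running alongside the addon workers: minikube repeatedly fetches the node object through the forwarded apiserver endpoint (127.0.0.1:63565 in this log) and checks its Ready condition, and it hits EOF for the same reason the applies fail. A sketch of that check using client-go follows; the node name and kubeconfig path are taken from the log, while the poll interval and timeout are illustrative.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitNodeReady polls one node's Ready condition until it is True or
	// the timeout expires; transient errors (EOF, connection refused) are
	// swallowed and the poll simply retries, as in the log above.
	func waitNodeReady(name, kubeconfig string, timeout time.Duration) error {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return err
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			return err
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(2 * time.Second)
		}
		return fmt.Errorf("node %q never became Ready", name)
	}

	func main() {
		if err := waitNodeReady("no-preload-184000", "/var/lib/minikube/kubeconfig", time.Minute); err != nil {
			fmt.Println(err)
		}
	}
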
	I1217 02:05:27.923285    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:28.002844    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard validation errors as above]
	I1217 02:05:28.002844    6768 retry.go:31] will retry after 5.747361639s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard validation errors as above]
	I1217 02:05:29.900036    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:29.991553    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: the same storageclass validation error as above]
	I1217 02:05:29.991553    6768 retry.go:31] will retry after 9.429682843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	[stdout empty; stderr: the same storageclass validation error as above]
	I1217 02:05:30.993971    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:31.105446    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: the same storage-provisioner validation error as above]
	I1217 02:05:31.105446    6768 retry.go:31] will retry after 5.178420591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: the same storage-provisioner validation error as above]
	I1217 02:05:33.754429    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:33.845352    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard validation errors as above]
	I1217 02:05:33.845402    6768 retry.go:31] will retry after 9.642479435s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	[stdout empty; stderr: the same ten dashboard validation errors as above]
	I1217 02:05:36.288994    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:36.371093    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	[stdout empty; stderr: the same storage-provisioner validation error as above]
	I1217 02:05:36.371618    6768 retry.go:31] will retry after 14.211846335s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:36.936896    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:05:39.427367    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:39.502910    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:39.503030    6768 retry.go:31] will retry after 10.108696058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:43.493020    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:43.580923    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:43.580923    6768 retry.go:31] will retry after 16.040898999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:46.976967    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:05:49.617032    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:49.730959    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:49.730959    6768 retry.go:31] will retry after 16.582879704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:50.589406    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:50.670822    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:50.670851    6768 retry.go:31] will retry after 12.887643821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:57.019347    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:05:59.627687    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:59.713200    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:59.713723    6768 retry.go:31] will retry after 31.011345009s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:03.563906    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:06:03.651782    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:03.651782    6768 retry.go:31] will retry after 28.171942024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:06.318780    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:06:06.402870    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:06.402870    6768 retry.go:31] will retry after 31.304704952s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:07.062212    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:06:17.102506    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:06:27.145042    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:06:30.731168    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:06:30.819096    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:30.819096    6768 retry.go:31] will retry after 35.987165188s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:31.828981    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:06:31.906351    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:31.906351    6768 retry.go:31] will retry after 41.89524319s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:37.186738    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:06:37.713791    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:06:37.796890    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:37.796890    6768 retry.go:31] will retry after 21.402180263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:47.232761    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:06:57.278368    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:06:59.204689    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:06:59.287141    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:59.287141    6768 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:07:06.812100    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:06.894801    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:06.894801    6768 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1217 02:07:07.318929    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:07:13.807325    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:13.898561    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:13.899092    6768 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:07:13.904986    6768 out.go:179] * Enabled addons: 
	I1217 02:07:13.908697    6768 addons.go:530] duration metric: took 1m57.5627021s for enable addons: enabled=[]
	W1217 02:07:17.361931    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:07:27.404743    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:07:37.439676    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:07:47.472644    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:07:57.510945    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:08:07.543861    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:08:17.577088    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:08:27.612061    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:08:37.653192    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:08:47.695483    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:08:57.736771    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:09:07.775163    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:09:17.811223    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:09:27.847902    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:09:37.889672    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:09:47.930106    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:09:57.967423    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:10:08.007712    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:10:18.049946    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:10:28.083644    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:10:38.119927    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:10:48.158175    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:10:58.200115    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:11:08.241744    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:11:16.871278    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1217 02:11:16.871278    6768 node_ready.go:38] duration metric: took 6m0.0008728s for node "no-preload-184000" to be "Ready" ...
	I1217 02:11:16.874572    6768 out.go:203] 
	W1217 02:11:16.876457    6768 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 02:11:16.876457    6768 out.go:285] * 
	W1217 02:11:16.879042    6768 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:11:16.881673    6768 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p no-preload-184000 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 80
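The stderr above gives the whole shape of the failure: for six minutes, roughly once every 10 seconds, each GET to https://127.0.0.1:63565/api/v1/nodes/no-preload-184000 answered with EOF, until the client rate limiter surfaced the 6m0s context deadline and the start exited with GUEST_START. Below is a minimal Go sketch of this kind of Ready-condition poll; it is not minikube's node_ready.go, and the client-go wiring, the 10-second interval, and the 6-minute budget are assumptions read off the log rather than taken from minikube's source.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node's "Ready" condition until it is True or the
// context deadline expires, retrying (as the log above does) on errors such
// as the EOFs from a dead apiserver endpoint.
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		} else {
			fmt.Printf("error getting node %q (will retry): %v\n", name, err)
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // surfaces as "context deadline exceeded"
		case <-time.After(10 * time.Second): // cadence seen in the log above
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "no-preload-184000"); err != nil {
		fmt.Println("node never became Ready:", err)
	}
}

A poll like this can only ever report the symptom; when every request EOFs, the interesting state is inside the container, which is what the docker inspect below goes after.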
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-184000
helpers_test.go:244: (dbg) docker inspect no-preload-184000:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed",
	        "Created": "2025-12-17T01:54:01.802457191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 454689,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T02:05:04.431751717Z",
	            "FinishedAt": "2025-12-17T02:05:01.217443908Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/hostname",
	        "HostsPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/hosts",
	        "LogPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed-json.log",
	        "Name": "/no-preload-184000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-184000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-184000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-184000",
	                "Source": "/var/lib/docker/volumes/no-preload-184000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-184000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-184000",
	                "name.minikube.sigs.k8s.io": "no-preload-184000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd75d9fe5c78c005b0249a246e3b62cf2a8873f5a0bf590eec1667b2401d46f3",
	            "SandboxKey": "/var/run/docker/netns/cd75d9fe5c78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63566"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63567"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63568"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63569"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63565"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-184000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6adb91d102dfa92bfa154127e93e39401be06a5d21df5043f3e85e012e93e321",
	                    "EndpointID": "2717bfe6e1d6a16c3b3b21a01d0c25052321fa1d05a920cee0a218e0ea604d53",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-184000",
	                        "335cbfb80690"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
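Two things in the inspect output matter here: the container has been up since 02:05:04 with no restarts, and the port map publishes container port 8443/tcp (the apiserver) on 127.0.0.1:63565, exactly the endpoint whose EOFs fill the stderr above. So the kic container itself is running; it is the apiserver behind that port that never answered. As a hedged sketch, the mapping can be recovered programmatically with the same Go template the minikube logs further down apply to 22/tcp, just pointed at 8443/tcp:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// Print the host port Docker mapped to the apiserver port (8443/tcp) of the
// minikube container; for this run that is 63565.
func main() {
	tmpl := `{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect",
		"-f", tmpl, "no-preload-184000").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver host port:", strings.TrimSpace(string(out)))
}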
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-184000 -n no-preload-184000
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-184000 -n no-preload-184000: exit status 2 (571.5168ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
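Note the split signal here: the formatted status prints Running on stdout while the command itself exits 2, and the harness explicitly treats that as "may be ok". minikube status appears to encode degraded or missing components in its exit code, so a consumer has to read stdout and the exit code separately rather than bail on the first non-zero exit. A sketch of doing that from Go, with hostStatus as an illustrative helper name (not part of any minikube API) and whatever minikube binary is on PATH standing in for the test's out/minikube-windows-amd64.exe:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

// hostStatus runs the same command the harness does and returns the
// formatted host field plus the raw exit code, tolerating non-zero exits.
func hostStatus(profile string) (string, int, error) {
	cmd := exec.Command("minikube", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile)
	out, err := cmd.Output() // stdout is captured even on a non-zero exit
	var ee *exec.ExitError
	if err != nil && !errors.As(err, &ee) {
		return "", 0, err // the binary could not be run at all
	}
	return strings.TrimSpace(string(out)), cmd.ProcessState.ExitCode(), nil
}

func main() {
	host, code, err := hostStatus("no-preload-184000")
	if err != nil {
		panic(err)
	}
	fmt.Printf("host=%s exit=%d\n", host, code) // here: host=Running exit=2
}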
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-184000 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-184000 logs -n 25: (1.3638244s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-278200 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ old-k8s-version-044000 image list --format=json                                                                                                                                                                            │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ pause   │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ unpause │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │                     │
	│ image   │ embed-certs-653800 image list --format=json                                                                                                                                                                                │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ default-k8s-diff-port-278200 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-184000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	│ stop    │ -p no-preload-184000 --alsologtostderr -v=3                                                                                                                                                                                │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ addons  │ enable dashboard -p no-preload-184000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ start   │ -p no-preload-184000 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-383500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	│ stop    │ -p newest-cni-383500 --alsologtostderr -v=3                                                                                                                                                                                │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │ 17 Dec 25 02:07 UTC │
	│ addons  │ enable dashboard -p newest-cni-383500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │ 17 Dec 25 02:07 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:07:37
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 02:07:37.336708    6296 out.go:360] Setting OutFile to fd 968 ...
	I1217 02:07:37.380113    6296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:07:37.380113    6296 out.go:374] Setting ErrFile to fd 1700...
	I1217 02:07:37.380113    6296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:07:37.394455    6296 out.go:368] Setting JSON to false
	I1217 02:07:37.396490    6296 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8845,"bootTime":1765928411,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 02:07:37.397485    6296 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 02:07:37.401853    6296 out.go:179] * [newest-cni-383500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 02:07:37.405009    6296 notify.go:221] Checking for updates...
	I1217 02:07:37.407761    6296 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:37.412054    6296 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:07:37.415031    6296 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 02:07:37.416942    6296 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:07:37.418887    6296 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1217 02:07:37.439676    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:07:37.422499    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:37.422499    6296 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:07:37.541250    6296 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 02:07:37.544536    6296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:07:37.790862    6296 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:07:37.763465755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:07:37.793941    6296 out.go:179] * Using the docker driver based on existing profile
	I1217 02:07:37.795944    6296 start.go:309] selected driver: docker
	I1217 02:07:37.795944    6296 start.go:927] validating driver "docker" against &{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9
PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:37.796941    6296 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:07:37.881125    6296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:07:38.106129    6296 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:07:38.085504737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:07:38.106129    6296 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 02:07:38.106129    6296 cni.go:84] Creating CNI manager for ""
	I1217 02:07:38.106661    6296 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:07:38.106789    6296 start.go:353] cluster config:
	{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:38.110370    6296 out.go:179] * Starting "newest-cni-383500" primary control-plane node in "newest-cni-383500" cluster
	I1217 02:07:38.113499    6296 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 02:07:38.115628    6296 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:07:38.118799    6296 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:07:38.118867    6296 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:07:38.118972    6296 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 02:07:38.119036    6296 cache.go:65] Caching tarball of preloaded images
	I1217 02:07:38.119094    6296 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 02:07:38.119094    6296 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 02:07:38.119094    6296 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 02:07:38.197259    6296 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:07:38.197259    6296 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:07:38.197259    6296 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:07:38.197259    6296 start.go:360] acquireMachinesLock for newest-cni-383500: {Name:mk34ae41921c4a11acc2a38ede8796b825a35934 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:07:38.197259    6296 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-383500"
	I1217 02:07:38.197259    6296 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:07:38.197259    6296 fix.go:54] fixHost starting: 
	I1217 02:07:38.204499    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:38.259240    6296 fix.go:112] recreateIfNeeded on newest-cni-383500: state=Stopped err=<nil>
	W1217 02:07:38.259240    6296 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 02:07:38.262335    6296 out.go:252] * Restarting existing docker container for "newest-cni-383500" ...
	I1217 02:07:38.265716    6296 cli_runner.go:164] Run: docker start newest-cni-383500
	I1217 02:07:38.804123    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:38.863188    6296 kic.go:430] container "newest-cni-383500" state is running.
	I1217 02:07:38.868900    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:38.924169    6296 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 02:07:38.926083    6296 machine.go:94] provisionDockerMachine start ...
	I1217 02:07:38.928987    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:38.984001    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:38.984993    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:38.984993    6296 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:07:38.986003    6296 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 02:07:42.161557    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 02:07:42.161646    6296 ubuntu.go:182] provisioning hostname "newest-cni-383500"
	I1217 02:07:42.166827    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.231443    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:42.231698    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:42.231698    6296 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-383500 && echo "newest-cni-383500" | sudo tee /etc/hostname
	I1217 02:07:42.423907    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 02:07:42.432743    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.491085    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:42.491085    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:42.491085    6296 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-383500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-383500/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-383500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:07:42.667009    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 02:07:42.667009    6296 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 02:07:42.667009    6296 ubuntu.go:190] setting up certificates
	I1217 02:07:42.667009    6296 provision.go:84] configureAuth start
	I1217 02:07:42.671320    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:42.724474    6296 provision.go:143] copyHostCerts
	I1217 02:07:42.725072    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 02:07:42.725072    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 02:07:42.725072    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 02:07:42.726229    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 02:07:42.726229    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 02:07:42.726812    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 02:07:42.727386    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 02:07:42.727386    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 02:07:42.727386    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 02:07:42.728644    6296 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-383500 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-383500]
	I1217 02:07:42.882778    6296 provision.go:177] copyRemoteCerts
	I1217 02:07:42.886667    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:07:42.889412    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.946034    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:43.080244    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 02:07:43.111350    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:07:43.145228    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 02:07:43.176328    6296 provision.go:87] duration metric: took 509.312ms to configureAuth
	I1217 02:07:43.176328    6296 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:07:43.176328    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:43.180705    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.236378    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.237514    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.237514    6296 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 02:07:43.404492    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 02:07:43.404492    6296 ubuntu.go:71] root file system type: overlay
	I1217 02:07:43.405056    6296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 02:07:43.408624    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.465282    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.465408    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.465408    6296 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 02:07:43.658319    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 02:07:43.662395    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.719191    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.719552    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.719552    6296 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 02:07:43.890999    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
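
The unit update above is deliberately idempotent: the rendered docker.service is written to docker.service.new, diffed against the live unit, and the mv / daemon-reload / enable / restart chain only runs when the two differ. A minimal local sketch of the same pattern in Go (hypothetical helper, not minikube's actual code; the real step runs over SSH as shown):

    package main

    import (
    	"bytes"
    	"fmt"
    	"os"
    	"os/exec"
    )

    // updateUnit swaps in a new systemd unit and restarts the service only
    // when the rendered content actually differs from what is on disk.
    func updateUnit(path, service string, content []byte) error {
    	if old, err := os.ReadFile(path); err == nil && bytes.Equal(old, content) {
    		return nil // unchanged: skip daemon-reload and restart entirely
    	}
    	if err := os.WriteFile(path+".new", content, 0o644); err != nil {
    		return err
    	}
    	if err := os.Rename(path+".new", path); err != nil {
    		return err
    	}
    	for _, args := range [][]string{
    		{"daemon-reload"},
    		{"enable", service},
    		{"restart", service},
    	} {
    		if out, err := exec.Command("systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %s: %w", args, out, err)
    		}
    	}
    	return nil
    }

    func main() {
    	unit := []byte("[Unit]\nDescription=example\n")
    	if err := updateUnit("/lib/systemd/system/example.service", "example", unit); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    	}
    }
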
	I1217 02:07:43.890999    6296 machine.go:97] duration metric: took 4.9648419s to provisionDockerMachine
	I1217 02:07:43.890999    6296 start.go:293] postStartSetup for "newest-cni-383500" (driver="docker")
	I1217 02:07:43.890999    6296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:07:43.895385    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:07:43.899109    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.952181    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.085157    6296 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:07:44.092998    6296 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:07:44.093086    6296 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:07:44.093086    6296 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 02:07:44.093465    6296 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 02:07:44.094379    6296 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 02:07:44.099969    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:07:44.115031    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 02:07:44.146317    6296 start.go:296] duration metric: took 255.2637ms for postStartSetup
	I1217 02:07:44.150381    6296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:07:44.153098    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.206142    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.337637    6296 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:07:44.346313    6296 fix.go:56] duration metric: took 6.1489614s for fixHost
	I1217 02:07:44.346313    6296 start.go:83] releasing machines lock for "newest-cni-383500", held for 6.1489614s
	I1217 02:07:44.350643    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:44.409164    6296 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 02:07:44.413957    6296 ssh_runner.go:195] Run: cat /version.json
	I1217 02:07:44.414540    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.416694    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.466739    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.469418    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	W1217 02:07:44.591848    6296 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 02:07:44.598090    6296 ssh_runner.go:195] Run: systemctl --version
	I1217 02:07:44.614283    6296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:07:44.624324    6296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:07:44.628955    6296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:07:44.642200    6296 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:07:44.642243    6296 start.go:496] detecting cgroup driver to use...
	I1217 02:07:44.642333    6296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:07:44.642453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:07:44.671216    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:07:44.689408    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:07:44.702919    6296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:07:44.707856    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:07:44.727869    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:07:44.747180    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	W1217 02:07:44.751020    6296 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 02:07:44.751020    6296 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 02:07:44.766866    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:07:44.786853    6296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:07:44.806986    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:07:44.828346    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:07:44.848400    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:07:44.870349    6296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:07:44.887217    6296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:07:44.905216    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:45.047629    6296 ssh_runner.go:195] Run: sudo systemctl restart containerd
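
The run of sed commands above rewrites /etc/containerd/config.toml in place (sandbox image, SystemdCgroup = false to match the detected cgroupfs driver, runc v2 runtime, conf_dir) and then reloads systemd and restarts containerd so the edits take effect. A sketch of that edit-then-restart flow, with the patterns lifted from the log but the helper itself hypothetical:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // reconfigureContainerd applies the same in-place config.toml edits the
    // log shows, then reloads systemd and restarts containerd.
    func reconfigureContainerd() error {
    	seds := []string{
    		`s|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g`,
    		`s|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g`,
    		`s|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g`,
    	}
    	for _, expr := range seds {
    		if out, err := exec.Command("sudo", "sed", "-i", "-r", expr,
    			"/etc/containerd/config.toml").CombinedOutput(); err != nil {
    			return fmt.Errorf("sed %q: %s: %w", expr, out, err)
    		}
    	}
    	for _, args := range [][]string{{"daemon-reload"}, {"restart", "containerd"}} {
    		if out, err := exec.Command("sudo", "systemctl", args...).CombinedOutput(); err != nil {
    			return fmt.Errorf("systemctl %v: %s: %w", args, out, err)
    		}
    	}
    	return nil
    }

    func main() {
    	fmt.Println(reconfigureContainerd())
    }
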
	I1217 02:07:45.203749    6296 start.go:496] detecting cgroup driver to use...
	I1217 02:07:45.203842    6296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:07:45.209421    6296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 02:07:45.236823    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:07:45.259331    6296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 02:07:45.337368    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:07:45.361492    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:07:45.381383    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:07:45.409600    6296 ssh_runner.go:195] Run: which cri-dockerd
	I1217 02:07:45.421762    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 02:07:45.435668    6296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 02:07:45.461708    6296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 02:07:45.616228    6296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 02:07:45.751670    6296 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 02:07:45.751670    6296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 02:07:45.778504    6296 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 02:07:45.800985    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:45.956342    6296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 02:07:46.816501    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:07:46.840410    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 02:07:46.865817    6296 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 02:07:46.890943    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:07:46.914319    6296 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 02:07:47.058242    6296 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 02:07:47.214522    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:47.355565    6296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	W1217 02:07:47.472644    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:07:47.382801    6296 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 02:07:47.407455    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:47.558893    6296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 02:07:47.666138    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:07:47.686246    6296 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 02:07:47.690618    6296 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 02:07:47.697013    6296 start.go:564] Will wait 60s for crictl version
	I1217 02:07:47.702316    6296 ssh_runner.go:195] Run: which crictl
	I1217 02:07:47.713878    6296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:07:47.755301    6296 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
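
Before trusting the runtime, the start path waits up to 60s for /var/run/cri-dockerd.sock to appear (the stat call above) and only then queries crictl version. A small sketch of that bounded wait, assuming a plain poll loop rather than minikube's exact implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls for a socket path until it exists or the deadline
    // passes, mirroring the "Will wait 60s for socket path" step in the log.
    func waitForSocket(path string, timeout time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
    			return nil
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
    }

    func main() {
    	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
    		fmt.Fprintln(os.Stderr, err)
    		os.Exit(1)
    	}
    	fmt.Println("socket ready")
    }
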
	I1217 02:07:47.758809    6296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:07:47.803772    6296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:07:47.845573    6296 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 02:07:47.849368    6296 cli_runner.go:164] Run: docker exec -t newest-cni-383500 dig +short host.docker.internal
	I1217 02:07:47.978778    6296 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 02:07:47.983162    6296 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 02:07:47.993198    6296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
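
The /etc/hosts rewrite above is a filter-and-append: any existing host.minikube.internal line is dropped, the fresh mapping is echoed on, and the result is copied back via a temp file. The same effect in Go (hypothetical helper; the real step runs the bash pipeline shown above):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    // upsertHost drops any existing line that maps name and appends a fresh
    // "ip<TAB>name" entry, matching the grep -v / echo / cp pipeline above.
    func upsertHost(path, ip, name string) error {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return err
    	}
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
    		if !strings.HasSuffix(line, "\t"+name) {
    			kept = append(kept, line)
    		}
    	}
    	kept = append(kept, ip+"\t"+name)
    	tmp := fmt.Sprintf("%s.%d", path, os.Getpid()) // like /tmp/h.$$ in the log
    	if err := os.WriteFile(tmp, []byte(strings.Join(kept, "\n")+"\n"), 0o644); err != nil {
    		return err
    	}
    	return os.Rename(tmp, path)
    }

    func main() {
    	fmt.Println(upsertHost("/etc/hosts", "192.168.65.254", "host.minikube.internal"))
    }
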
	I1217 02:07:48.011887    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:48.072090    6296 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 02:07:48.073820    6296 kubeadm.go:884] updating cluster {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:07:48.073820    6296 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:07:48.077080    6296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 02:07:48.110342    6296 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 02:07:48.110411    6296 docker.go:621] Images already preloaded, skipping extraction
	I1217 02:07:48.113821    6296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 02:07:48.144461    6296 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 02:07:48.144530    6296 cache_images.go:86] Images are preloaded, skipping loading
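
The preload check is a simple set comparison: list what "docker images --format {{.Repository}}:{{.Tag}}" reports and confirm every required image is present before deciding to skip tarball extraction. A sketch of that check, with the two images below as illustrative entries from the list above:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // imagesPreloaded reports whether every required image already shows up
    // in docker images, the same check the log uses to skip re-extraction.
    func imagesPreloaded(required []string) (bool, error) {
    	out, err := exec.Command("docker", "images", "--format",
    		"{{.Repository}}:{{.Tag}}").Output()
    	if err != nil {
    		return false, err
    	}
    	have := map[string]bool{}
    	for _, img := range strings.Fields(string(out)) {
    		have[img] = true
    	}
    	for _, img := range required {
    		if !have[img] {
    			return false, nil
    		}
    	}
    	return true, nil
    }

    func main() {
    	ok, err := imagesPreloaded([]string{
    		"registry.k8s.io/kube-apiserver:v1.35.0-beta.0",
    		"registry.k8s.io/pause:3.10.1",
    	})
    	fmt.Println(ok, err)
    }
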
	I1217 02:07:48.144530    6296 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1217 02:07:48.144779    6296 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-383500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 02:07:48.149102    6296 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 02:07:48.225894    6296 cni.go:84] Creating CNI manager for ""
	I1217 02:07:48.225894    6296 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:07:48.225894    6296 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 02:07:48.225894    6296 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-383500 NodeName:newest-cni-383500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:07:48.226504    6296 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-383500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 02:07:48.230913    6296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
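
The kubeadm config above is rendered from the options struct logged at kubeadm.go:190. A pared-down sketch of that kind of rendering with text/template, assuming a hypothetical, much shorter template than the real one:

    package main

    import (
    	"os"
    	"text/template"
    )

    // A stand-in for the InitConfiguration rendering; the real template
    // carries many more fields (see the full config above).
    const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    type opts struct {
    	AdvertiseAddress string
    	APIServerPort    int
    	CRISocket        string
    	NodeName         string
    }

    func main() {
    	t := template.Must(template.New("init").Parse(initTmpl))
    	t.Execute(os.Stdout, opts{
    		AdvertiseAddress: "192.168.76.2",
    		APIServerPort:    8443,
    		CRISocket:        "unix:///var/run/cri-dockerd.sock",
    		NodeName:         "newest-cni-383500",
    	})
    }
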
	I1217 02:07:48.243749    6296 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:07:48.248634    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:07:48.262382    6296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 02:07:48.284386    6296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:07:48.306623    6296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1217 02:07:48.332101    6296 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:07:48.341865    6296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:07:48.360919    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:48.498620    6296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:07:48.520308    6296 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500 for IP: 192.168.76.2
	I1217 02:07:48.520346    6296 certs.go:195] generating shared ca certs ...
	I1217 02:07:48.520390    6296 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:48.520420    6296 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 02:07:48.521152    6296 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 02:07:48.521359    6296 certs.go:257] generating profile certs ...
	I1217 02:07:48.521695    6296 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key
	I1217 02:07:48.521695    6296 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8
	I1217 02:07:48.522472    6296 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key
	I1217 02:07:48.523217    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 02:07:48.523515    6296 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 02:07:48.523598    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 02:07:48.523888    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 02:07:48.524140    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 02:07:48.524399    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 02:07:48.525045    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 02:07:48.526649    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:07:48.558725    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 02:07:48.590333    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:07:48.621493    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 02:07:48.650907    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:07:48.678948    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:07:48.708871    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:07:48.738822    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 02:07:48.769873    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 02:07:48.801411    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 02:07:48.828208    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:07:48.859551    6296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:07:48.888197    6296 ssh_runner.go:195] Run: openssl version
	I1217 02:07:48.903194    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.920018    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 02:07:48.936734    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.943690    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.948571    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.997651    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:07:49.015514    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.035513    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:07:49.056511    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.065394    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.070742    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.117805    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:07:49.140198    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.156992    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 02:07:49.175485    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.184194    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.187479    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.237543    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 02:07:49.254809    6296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:07:49.269508    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:07:49.317073    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:07:49.365797    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:07:49.413853    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:07:49.462871    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:07:49.515512    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
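
Two openssl idioms appear above: "openssl x509 -hash -noout" computes the subject hash used to name the /etc/ssl/certs/<hash>.0 symlink, and "-checkend 86400" exits non-zero if a cert expires within the next 24 hours. A sketch wrapping both, with hypothetical helper names:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    // installCA symlinks a CA into the hashed layout OpenSSL expects,
    // mirroring the ln -fs / openssl x509 -hash steps in the log.
    func installCA(pem string) error {
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		return err
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	os.Remove(link) // replicate ln -fs (force)
    	return os.Symlink(pem, link)
    }

    // expiresSoon reports whether the cert expires within the next 24h,
    // the same check as openssl x509 -checkend 86400.
    func expiresSoon(pem string) bool {
    	return exec.Command("openssl", "x509", "-noout", "-in", pem,
    		"-checkend", "86400").Run() != nil
    }

    func main() {
    	fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
    	fmt.Println(expiresSoon("/var/lib/minikube/certs/apiserver-kubelet-client.crt"))
    }
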
	I1217 02:07:49.558666    6296 kubeadm.go:401] StartCluster: {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:49.563317    6296 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 02:07:49.602899    6296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:07:49.616365    6296 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:07:49.616365    6296 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:07:49.622022    6296 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:07:49.637152    6296 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:07:49.641090    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.693295    6296 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-383500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:49.693843    6296 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-383500" cluster setting kubeconfig missing "newest-cni-383500" context setting]
	I1217 02:07:49.694722    6296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
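
The repair step adds the missing cluster and context stanzas and rewrites the kubeconfig under a file lock. A sketch of the same fix using client-go's clientcmd package (an external dependency), with illustrative names and an illustrative server URL; the real port is whichever host port Docker published for 8443/tcp:

    package main

    import (
    	"fmt"

    	"k8s.io/client-go/tools/clientcmd"
    	clientcmdapi "k8s.io/client-go/tools/clientcmd/api"
    )

    // repairKubeconfig adds a missing cluster/context pair and writes the
    // file back, the same shape of fix as "needs updating (will repair)".
    func repairKubeconfig(path, name, server string) error {
    	cfg, err := clientcmd.LoadFromFile(path)
    	if err != nil {
    		return err
    	}
    	if _, ok := cfg.Clusters[name]; !ok {
    		cfg.Clusters[name] = &clientcmdapi.Cluster{Server: server}
    	}
    	if _, ok := cfg.Contexts[name]; !ok {
    		cfg.Contexts[name] = &clientcmdapi.Context{Cluster: name, AuthInfo: name}
    	}
    	return clientcmd.WriteToFile(*cfg, path)
    }

    func main() {
    	fmt.Println(repairKubeconfig(
    		`C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`,
    		"newest-cni-383500", "https://127.0.0.1:8443")) // port illustrative
    }
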
	I1217 02:07:49.716755    6296 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:07:49.731850    6296 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1217 02:07:49.731850    6296 kubeadm.go:602] duration metric: took 115.4836ms to restartPrimaryControlPlane
	I1217 02:07:49.731850    6296 kubeadm.go:403] duration metric: took 173.1816ms to StartCluster
	I1217 02:07:49.731850    6296 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:49.731850    6296 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:49.732839    6296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:49.734654    6296 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 02:07:49.734654    6296 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:07:49.734654    6296 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:70] Setting dashboard=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:49.734654    6296 addons.go:70] Setting default-storageclass=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.734654    6296 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:239] Setting addon dashboard=true in "newest-cni-383500"
	W1217 02:07:49.734654    6296 addons.go:248] addon dashboard should already be in state true
	I1217 02:07:49.735179    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.739634    6296 out.go:179] * Verifying Kubernetes components...
	I1217 02:07:49.743427    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.744378    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.744378    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.745812    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:49.809135    6296 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:07:49.809532    6296 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 02:07:49.812989    6296 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:49.812989    6296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:07:49.816981    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.817010    6296 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:07:49.818467    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:07:49.818467    6296 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:07:49.823270    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.824987    6296 addons.go:239] Setting addon default-storageclass=true in "newest-cni-383500"
	I1217 02:07:49.825100    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.836645    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.881995    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.881995    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.889991    6296 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:49.889991    6296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:07:49.892991    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.925992    6296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:07:49.943010    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.950996    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:50.005058    6296 api_server.go:52] waiting for apiserver process to appear ...
	I1217 02:07:50.009064    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:50.011068    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.014077    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:07:50.014077    6296 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:07:50.034057    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:07:50.034057    6296 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:07:50.102553    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:07:50.102611    6296 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:07:50.106900    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:50.124027    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:07:50.124027    6296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 02:07:50.189590    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:07:50.189677    6296 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1217 02:07:50.190082    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.190082    6296 retry.go:31] will retry after 343.200838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
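
Each failed apply is retried after a jittered, growing delay (retry.go:31 logs the chosen interval) until the apiserver starts accepting connections. A generic sketch of that retry-with-backoff loop; minikube's own retry package differs in detail:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retryWithBackoff re-runs fn until it succeeds or attempts run out,
    // sleeping a jittered, growing interval between tries, as the
    // "will retry after ..." log lines above show.
    func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		sleep := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %s: %v\n", sleep, err)
    		time.Sleep(sleep)
    	}
    	return err
    }

    func main() {
    	n := 0
    	err := retryWithBackoff(5, 200*time.Millisecond, func() error {
    		n++
    		if n < 3 {
    			return errors.New("connection refused") // apiserver not up yet
    		}
    		return nil
    	})
    	fmt.Println(err)
    }
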
	I1217 02:07:50.212250    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:07:50.212311    6296 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:07:50.231619    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:07:50.231619    6296 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W1217 02:07:50.241078    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.241078    6296 retry.go:31] will retry after 338.608253ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.254747    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:07:50.254794    6296 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:07:50.277655    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:50.277655    6296 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:07:50.303268    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:50.381205    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.381205    6296 retry.go:31] will retry after 204.689537ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.510673    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:50.538343    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.585518    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:50.590250    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:50.625635    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.625793    6296 retry.go:31] will retry after 198.686568ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1217 02:07:50.703247    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.703247    6296 retry.go:31] will retry after 199.792365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	W1217 02:07:50.713669    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.714671    6296 retry.go:31] will retry after 441.125735ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
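
The retry.go lines show the addon applier backing off between attempts: the delays are randomized and grow from roughly 200ms toward several seconds while the apiserver stays down. A minimal sketch of that jittered exponential backoff pattern (illustrative only; minikube's actual retry helper may use different constants and jitter):

// Jittered exponential backoff, as suggested by the retry.go delays above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	delay := base
	for i := 0; i < attempts; i++ {
		err := fn()
		if err == nil {
			return nil
		}
		// Randomize each delay within [0.5x, 1.5x) so parallel appliers
		// do not retry in lockstep, then double the base for next time.
		sleep := time.Duration(float64(delay) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", sleep, err)
		time.Sleep(sleep)
		delay *= 2
	}
	return errors.New("all retry attempts failed")
}

func main() {
	_ = retryWithBackoff(4, 200*time.Millisecond, func() error {
		// Stand-in for the failing "kubectl apply" calls above.
		return errors.New("dial tcp [::1]:8443: connect: connection refused")
	})
}

The jitter also explains why the three appliers in this log (storage-provisioner, storageclass, dashboard) drift apart over time instead of hitting the apiserver at the same instant.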
	I1217 02:07:50.831068    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.910787    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:50.921027    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.921027    6296 retry.go:31] will retry after 637.088373ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1217 02:07:50.993148    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.993148    6296 retry.go:31] will retry after 819.774881ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:07:51.009768    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.161082    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:51.282295    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.282369    6296 retry.go:31] will retry after 677.278565ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:07:51.510844    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.563702    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:51.642986    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.642986    6296 retry.go:31] will retry after 1.231128198s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:07:51.817677    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:51.902470    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.902470    6296 retry.go:31] will retry after 1.160161898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:07:51.964724    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:52.009393    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:52.053520    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.053520    6296 retry.go:31] will retry after 497.775491ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:07:52.510530    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:52.556698    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:52.641425    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.641425    6296 retry.go:31] will retry after 893.419079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:07:52.880811    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:52.961643    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.961643    6296 retry.go:31] will retry after 1.354718896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:07:53.009905    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:53.068292    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:53.159843    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.159885    6296 retry.go:31] will retry after 830.811591ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:07:53.510300    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:53.539679    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:53.634195    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.634195    6296 retry.go:31] will retry after 1.875797166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:07:53.997012    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:54.010116    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:54.085004    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.085004    6296 retry.go:31] will retry after 2.403477641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:07:54.321510    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:54.401677    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.401677    6296 retry.go:31] will retry after 2.197762331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
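
Interleaved with the applies, the log runs sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms, polling for the apiserver process inside the guest. A minimal sketch of such a wait loop (the command and pattern are copied from the log; the loop itself is illustrative, not minikube's implementation):

// Poll for the kube-apiserver process, as the repeated pgrep probes imply.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) bool {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// -x exact match, -n newest, -f match the full command line.
		// pgrep exits 0 only when a matching process exists.
		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
		if err == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond) // the log polls at ~500ms intervals
	}
	return false
}

func main() {
	if waitForAPIServerProcess(2 * time.Minute) {
		fmt.Println("kube-apiserver process is up")
	} else {
		fmt.Println("timed out waiting for kube-apiserver")
	}
}

Note that a matching process only proves kube-apiserver exists, not that port 8443 is accepting connections yet, which is why the applies can keep failing while the pgrep probes continue.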
	I1217 02:07:54.509750    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:55.011577    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:55.509949    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:55.514301    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:55.590724    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:55.590724    6296 retry.go:31] will retry after 3.771224323s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
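All ten dashboard manifests fail with the same "failed to download openapi" message because kubectl's client-side validation first fetches the OpenAPI v2 schema from the apiserver; with nothing accepting connections on localhost:8443, the schema download is refused and validation fails for every file in the batch at once. A minimal reachability probe for that endpoint, assuming only the URL and timeout shown in the log (the InsecureSkipVerify setting is an assumption for a local check, not anything minikube does):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 32 * time.Second, // the timeout kubectl encodes in the URL
		Transport: &http.Transport{
			// Assumption for the sketch: the apiserver serves a self-signed
			// cert, so verification is skipped for this reachability check.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://localhost:8443/openapi/v2?timeout=32s")
	if err != nil {
		// "connect: connection refused" here reproduces the log's failure.
		fmt.Println("openapi unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi status:", resp.Status)
}

kubectl's suggested --validate=false would skip the schema download entirely; minikube instead keeps validation on and retries until the apiserver answers.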
	I1217 02:07:56.010995    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:56.493760    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:56.509755    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:56.580067    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.580067    6296 retry.go:31] will retry after 2.862008002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.606008    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:56.692846    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.693375    6296 retry.go:31] will retry after 3.419223727s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:57.009866    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:57.510327    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:57.510945    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
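The pid 6768 lines woven through this log belong to a second, concurrent cluster start ("no-preload-184000" from TestStartStop) polling its node's Ready condition through the forwarded port 127.0.0.1:63565; the EOF means the connection opens and then drops mid-request rather than being refused. A minimal client-go sketch of that readiness check, assuming a standard kubeconfig path (illustrative, not minikube's node_ready.go):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// The kubeconfig path is an assumption for this sketch.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(),
		"no-preload-184000", metav1.GetOptions{})
	if err != nil {
		// An EOF here is the failure mode in the log: the TCP session to
		// the forwarded apiserver port is torn down mid-request.
		fmt.Println("error getting node (will retry):", err)
		return
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Println("Ready condition:", cond.Status)
		}
	}
}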
	I1217 02:07:58.010333    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:58.511391    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:59.013796    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
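Interleaved with the applies, the main process (pid 6296) polls for the apiserver roughly every 500 ms: pgrep's -x matches the pattern exactly, -n picks the newest PID, -f matches against the full command line. The loop never logs a hit in this window, so no kube-apiserver process appeared. A sketch of that poll under those flag semantics; the function name and timeout are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a matching process exists or the
// timeout elapses. Exit status 0 from pgrep means at least one match.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf",
			"kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // matches the log's cadence
	}
	return fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}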
	I1217 02:07:59.367655    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:59.447582    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:59.457416    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.457416    6296 retry.go:31] will retry after 6.254269418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.510215    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:59.536524    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.536524    6296 retry.go:31] will retry after 4.240139996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.010517    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:00.118263    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:00.197472    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.197472    6296 retry.go:31] will retry after 5.486941273s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.511349    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:01.012031    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:01.510877    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:02.011372    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:02.510995    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.011372    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.511479    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.781390    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:03.867561    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:03.867561    6296 retry.go:31] will retry after 5.255488401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:04.011296    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:04.510695    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.011055    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.510174    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.690069    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:05.718147    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:05.792389    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:05.792389    6296 retry.go:31] will retry after 3.294946391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:05.802187    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:05.802187    6296 retry.go:31] will retry after 6.599881974s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:06.010721    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:06.509941    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:07.010092    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:07.511303    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:07.543861    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:08.011059    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:08.511015    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.009909    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.092821    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:09.127423    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:09.180638    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.180716    6296 retry.go:31] will retry after 13.056189647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:09.211988    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.212069    6296 retry.go:31] will retry after 13.872512266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.510829    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:10.010907    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:10.513112    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:11.010572    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:11.509543    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.010570    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.409071    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:12.497495    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:12.497495    6296 retry.go:31] will retry after 9.788092681s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:12.510004    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:13.011338    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:13.509984    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:14.010499    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:14.511126    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.010949    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.511741    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:16.011278    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:16.511157    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:17.010863    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:17.511273    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:17.577088    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:18.010782    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.510594    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:19.011193    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:19.512050    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:20.011700    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:20.511001    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.010461    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.510457    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:22.011002    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:22.242227    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:22.290434    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:22.384800    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.384884    6296 retry.go:31] will retry after 11.75975207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:22.424758    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.424758    6296 retry.go:31] will retry after 15.557196078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.510556    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:23.011645    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:23.090496    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:23.176544    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:23.176625    6296 retry.go:31] will retry after 13.26458747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:23.510872    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.011245    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.511483    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:25.011656    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:25.510967    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:26.012125    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:26.512672    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:27.011155    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:27.512368    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:27.612061    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:28.010889    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:28.511767    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.011035    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.512111    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:30.010919    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:30.510464    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:31.010433    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:31.511392    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.010680    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.510963    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:33.011818    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:33.511638    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:34.011591    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
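The pgrep probes above fire on a steady ~500ms cadence: the loop is waiting for a kube-apiserver process matching the profile to appear before retrying against the API. A minimal sketch of such a wait loop, polling locally rather than over SSH (the waitForProcess helper is hypothetical):

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess runs `pgrep -xnf pattern` every interval until it exits 0
    // (a matching process exists) or the context is cancelled.
    func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
    	ticker := time.NewTicker(interval)
    	defer ticker.Stop()
    	for {
    		if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil // found a matching process
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    	defer cancel()
    	fmt.Println("wait result:", waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond))
    }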
	I1217 02:08:34.151810    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:34.242474    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:34.242474    6296 retry.go:31] will retry after 23.644538854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:34.513602    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.011269    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.511142    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:36.011267    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:36.446774    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:08:36.511283    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:36.541778    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:36.541860    6296 retry.go:31] will retry after 14.024805043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:37.010743    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:37.510520    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:37.653192    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:37.987959    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:08:38.011587    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:38.113276    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:38.113276    6296 retry.go:31] will retry after 20.609884455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:38.511817    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:39.012624    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:39.511353    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:40.011079    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:40.511636    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.011582    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.512671    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:42.011503    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:42.511640    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:43.011054    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:43.510485    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.011395    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.511333    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:45.011435    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:45.513316    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:46.012600    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:46.512307    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.012227    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.512888    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:48.011996    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:48.511276    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:49.011053    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:49.511776    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:50.011678    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:50.050889    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.050889    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:50.055201    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:50.085770    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.085770    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:50.090316    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:50.123762    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.123762    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:50.127529    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:50.157626    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.157626    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:50.163652    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:50.189945    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.189945    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:50.193620    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:50.222819    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.222866    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:50.227818    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:50.256909    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.256909    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:50.260970    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:50.290387    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.290387    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:50.290387    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:50.290387    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:50.357876    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:50.357876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:50.420098    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:50.420098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:50.460376    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:50.460376    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:50.542989    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:50.534097    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.535406    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.536541    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.537655    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.539165    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:50.534097    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.535406    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.536541    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.537655    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.539165    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:50.542989    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:50.542989    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
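With no control-plane containers found, the run falls back to gathering diagnostics: container status, kubelet and Docker journals, dmesg, and a kubectl describe nodes (which itself fails while the apiserver is down, and is logged as a warning rather than aborting the collection). A rough sketch of that tolerant fan-out, run locally here instead of over SSH inside the node:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // gather runs each diagnostic command and keeps whatever output it produced,
    // even on failure, mirroring how the failed "describe nodes" above is logged
    // as a warning while the other collectors still run.
    func gather(cmds map[string][]string) map[string]string {
    	out := make(map[string]string)
    	for name, argv := range cmds {
    		b, err := exec.Command(argv[0], argv[1:]...).CombinedOutput()
    		if err != nil {
    			fmt.Printf("failed %s: %v (keeping partial output)\n", name, err)
    		}
    		out[name] = string(b)
    	}
    	return out
    }

    func main() {
    	logs := gather(map[string][]string{
    		"kubelet": {"journalctl", "-u", "kubelet", "-n", "400"},
    		"docker":  {"journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400"},
    		"dmesg":   {"sh", "-c", "dmesg | tail -n 400"},
    	})
    	for name, text := range logs {
    		fmt.Printf("=== %s: %d bytes\n", name, len(text))
    	}
    }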
	I1217 02:08:50.570331    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:50.645772    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:50.645772    6296 retry.go:31] will retry after 16.344343138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:47.695483    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:53.075519    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:53.098924    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:53.131675    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.131675    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:53.135542    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:53.166511    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.166511    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:53.170265    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:53.198547    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.198547    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:53.202694    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:53.232459    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.232459    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:53.235758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:53.263802    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.263802    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:53.268318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:53.296956    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.296956    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:53.301349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:53.331331    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.331331    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:53.335255    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:53.367520    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.367550    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:53.367577    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:53.367602    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:53.453750    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:53.444459    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.445431    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.446930    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.448003    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.449000    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:53.444459    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.445431    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.446930    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.448003    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.449000    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:53.453837    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:53.453887    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:53.485058    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:53.485058    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:53.540050    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:53.540050    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:53.604101    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:53.604101    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.146858    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:56.172227    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:56.203897    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.203941    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:56.207562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:56.236114    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.236114    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:56.240341    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:56.274958    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.274958    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:56.280577    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:56.308906    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.308906    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:56.312811    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:56.340777    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.340836    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:56.343843    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:56.371408    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.371441    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:56.374771    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:56.406487    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.406487    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:56.410973    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:56.441247    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.441247    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:56.441247    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:56.441247    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:56.506877    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:56.506877    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.548841    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:56.548841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:56.633101    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:56.624778    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.625942    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.626969    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.628325    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.629359    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:56.624778    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.625942    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.626969    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.628325    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.629359    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:56.633101    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:56.633101    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:56.659421    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:56.659457    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:57.892877    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:57.970838    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:57.970838    6296 retry.go:31] will retry after 27.385193451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:58.728649    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:58.834139    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:58.834680    6296 retry.go:31] will retry after 32.13321777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:59.213728    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:59.238361    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:59.266298    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.266298    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:59.270295    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:59.299414    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.299414    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:59.302581    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:59.335627    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.335627    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:59.339238    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:59.367042    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.367042    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:59.371258    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:59.401507    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.401507    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:59.405468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:59.436657    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.436657    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:59.440955    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:59.471027    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.471027    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:59.474047    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:59.505164    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.505164    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:59.505164    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:59.505164    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:59.533835    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:59.533835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:59.586695    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:59.587671    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:59.648841    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:59.648841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:59.688691    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:59.688691    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:59.777044    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:59.763261    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.764003    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.767722    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.770018    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.771065    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:59.763261    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.764003    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.767722    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.770018    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.771065    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
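The repeated "dial tcp [::1]:8443: connect: connection refused" means nothing is listening on the apiserver port at all, as opposed to a TLS or authorization failure from a running server. A minimal probe that makes that distinction explicit (the address is the one from the log):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// e.g. "connect: connection refused": no listener on the port.
    		fmt.Println("apiserver port closed:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("something is listening on localhost:8443")
    }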
	I1217 02:09:02.282707    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:02.307570    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:02.340326    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.340412    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:02.343993    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:02.374035    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.374079    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:02.377688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	W1217 02:08:57.736771    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:02.409724    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.409724    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:02.414154    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:02.442993    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.442993    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:02.447591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:02.474966    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.474966    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:02.479447    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:02.511675    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.511675    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:02.515939    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:02.544034    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.544034    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:02.548633    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:02.578196    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.578196    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:02.578196    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:02.578196    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:02.642449    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:02.643420    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:02.681562    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:02.681562    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:02.766017    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:02.754951    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.756418    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.757119    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.759531    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.760553    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:02.754951    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.756418    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.757119    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.759531    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.760553    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:02.766017    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:02.766017    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:02.795166    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:02.795166    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
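
Editor's note: the block above is one full probe cycle, and it repeats below roughly every three seconds. minikube first looks for a running apiserver process, then falls back to listing any container whose name matches each control-plane component. A minimal shell sketch of that probe, built only from the exact commands visible in the log (an illustration, not minikube's source):

    # Prints the apiserver PID if the process is up; otherwise falls through
    # to listing matching container IDs — empty output here, which is why the
    # log shows "0 containers" and 'No container was found matching'.
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      || docker ps -a --filter=name=k8s_kube-apiserver --format '{{.ID}}'
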
	I1217 02:09:05.347132    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:05.372840    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:05.424611    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.424686    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:05.428337    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:05.461682    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.461682    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:05.465790    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:05.495395    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.495395    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:05.499215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:05.528620    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.528620    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:05.532226    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:05.560375    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.560375    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:05.564119    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:05.595214    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.595214    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:05.600088    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:05.633183    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.633183    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:05.636776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:05.664840    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.664840    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:05.664840    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:05.664840    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:05.718503    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:05.718503    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:05.781489    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:05.781489    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:05.821081    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:05.821081    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:05.905451    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:05.896107    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.897043    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.898918    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.899910    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.901056    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:05.896107    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.897043    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.898918    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.899910    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.901056    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
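
Editor's note: every "describe nodes" attempt in this window fails identically because nothing is listening on the apiserver port. A quick manual confirmation from inside the node (a hypothetical diagnostic, not something the test itself runs):

    # -k skips TLS verification; expect "connection refused" while
    # kube-apiserver is down, and "ok" once it serves /healthz.
    curl -k https://localhost:8443/healthz
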
	I1217 02:09:05.905451    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:05.905451    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:06.996471    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:09:07.077056    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:07.077056    6296 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
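
Editor's note: the stderr above suggests `--validate=false` as a workaround. That flag only disables kubectl's client-side schema validation — the step that needs the failing OpenAPI download — so the apply would still be refused while the apiserver is unreachable. For completeness, the retry the message hints at would look like this (illustrative only, using the paths from the log):

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
      -f /etc/kubernetes/addons/storageclass.yaml
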
	I1217 02:09:08.443326    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:08.470285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:08.499191    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.499191    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:08.503346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:08.531727    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.531727    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:08.535874    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:08.567724    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.567724    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:08.571504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:08.601814    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.601814    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:08.605003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:08.638738    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.638815    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:08.642116    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:08.672949    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.672949    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:08.676953    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:08.706081    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.706145    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:08.709298    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:08.737856    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.737856    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:08.737856    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:08.737856    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:08.798236    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:08.798236    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:08.838053    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:08.838053    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:08.925271    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:08.915579    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.916804    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.917832    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.919242    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.920277    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:08.915579    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.916804    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.917832    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.919242    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.920277    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:08.925271    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:08.925271    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:08.952860    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:08.952934    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.505032    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:11.532273    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:11.560855    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.560907    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:11.564808    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:11.595967    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.596024    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:11.599911    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:11.628443    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.628443    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:11.632103    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:11.659899    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.659899    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:11.663896    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:11.695830    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.695864    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:11.699333    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:11.728245    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.728314    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:11.731834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:11.762004    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.762038    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:11.765497    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:11.800437    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.800437    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:11.800437    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:11.800437    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.850659    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:11.850659    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:11.927328    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:11.927328    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:11.968115    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:11.968115    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:12.061366    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:12.049456    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.050395    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.051658    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.052989    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.055935    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:12.049456    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.050395    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.051658    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.052989    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.055935    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:12.061366    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:12.061366    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:09:07.775163    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
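
Editor's note: the interleaved W-lines from PID 6768 belong to the parallel no-preload test, which is polling the node's Ready condition via a direct API GET against https://127.0.0.1:63565 and getting EOF. The same check expressed as a plain kubectl query (illustrative equivalent; the test hits the API directly):

    # Prints "True" once the Ready condition is satisfied; fails with a
    # connection error while the apiserver behind the forwarded port is down.
    kubectl get node no-preload-184000 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
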
	I1217 02:09:14.593463    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:14.619698    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:14.649625    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.649625    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:14.653809    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:14.682807    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.682865    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:14.686225    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:14.716867    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.716867    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:14.720947    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:14.748712    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.748712    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:14.753598    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:14.786467    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.786467    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:14.790745    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:14.820388    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.820388    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:14.824364    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:14.856683    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.856715    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:14.860387    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:14.907334    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.907388    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:14.907388    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:14.907388    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:14.970536    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:14.971543    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:15.009837    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:15.009837    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:15.100833    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:15.089537    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.090644    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.091541    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.092652    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.093429    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:15.089537    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.090644    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.091541    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.092652    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.093429    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:15.100833    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:15.100833    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:15.129774    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:15.129838    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:17.687506    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:17.711884    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:17.740676    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.740676    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:17.743807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:17.775526    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.775598    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:17.779196    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:17.810564    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.810564    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:17.815366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:17.847149    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.847149    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:17.850304    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:17.880825    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.880825    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:17.884416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:17.913663    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.913663    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:17.917519    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:17.949675    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.949736    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:17.953399    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:17.981777    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.981777    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:17.981853    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:17.981853    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:18.045143    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:18.045143    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:18.085682    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:18.085682    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:18.174824    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:18.164839    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.166260    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.167755    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.169313    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.170543    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:18.164839    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.166260    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.167755    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.169313    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.170543    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:18.174862    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:18.174890    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:18.201721    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:18.201721    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:20.754573    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:20.779418    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:20.815289    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.815336    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:20.821329    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:20.849494    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.849566    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:20.853416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:20.886139    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.886213    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:20.890864    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:20.921623    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.921691    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:20.925413    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:20.955001    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.955030    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:20.959115    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:20.986446    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.986446    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:20.990622    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:21.019381    6296 logs.go:282] 0 containers: []
	W1217 02:09:21.019903    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:21.023386    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:21.049708    6296 logs.go:282] 0 containers: []
	W1217 02:09:21.049708    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:21.049708    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:21.049708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:21.114512    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:21.114512    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:21.154312    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:21.154312    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:21.241835    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:21.232254    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.233191    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.235446    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.236247    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.238241    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:21.232254    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.233191    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.235446    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.236247    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.238241    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:21.241835    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:21.241835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:21.269935    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:21.269935    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:09:17.811223    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:23.827385    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:23.851293    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:23.884017    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.884017    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:23.887852    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:23.920819    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.920819    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:23.925124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:23.953397    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.953468    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:23.957090    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:23.987965    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.987965    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:23.992238    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:24.021188    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.021188    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:24.027472    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:24.059066    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.059066    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:24.062927    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:24.092066    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.092066    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:24.096083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:24.130020    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.130083    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:24.130083    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:24.130083    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:24.193264    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:24.193264    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:24.233590    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:24.233590    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:24.334738    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:24.323376    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.324478    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.325163    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327407    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327995    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:24.323376    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.324478    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.325163    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327407    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327995    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:24.334738    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:24.334738    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:24.361711    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:24.361711    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:25.361736    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:09:25.443830    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:25.443830    6296 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
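
Editor's note: both addon failures (storageclass earlier, storage-provisioner here) share one root cause — validation cannot download the OpenAPI schema because the apiserver is down. A trivial way to rule out a missing manifest as an alternative explanation (hypothetical check, assuming shell access to the node; paths taken from the log):

    # Both files should exist; the failures are network-side, not file-side.
    ls -l /etc/kubernetes/addons/storageclass.yaml \
          /etc/kubernetes/addons/storage-provisioner.yaml
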
	I1217 02:09:26.915928    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:26.940552    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:26.972265    6296 logs.go:282] 0 containers: []
	W1217 02:09:26.972334    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:26.975468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:27.004131    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.004131    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:27.007688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:27.040755    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.040755    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:27.044298    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:27.075607    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.075607    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:27.079764    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:27.109726    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.109726    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:27.113807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:27.142060    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.142060    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:27.145049    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:27.179827    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.179898    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:27.183340    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:27.212340    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.212340    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:27.212340    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:27.212340    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:27.290453    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:27.280957    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.282008    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.283593    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.284873    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.286226    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:27.280957    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.282008    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.283593    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.284873    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.286226    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:27.290453    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:27.290453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:27.317919    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:27.317919    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:27.372636    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:27.372636    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:27.434881    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:27.434881    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:29.980965    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:30.007081    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:30.038766    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.038766    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:30.042837    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:30.074216    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.074277    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:30.077495    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:30.109815    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.109815    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:30.113543    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:30.144692    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.144692    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:30.148595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:30.181530    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.181530    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:30.185056    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:30.230054    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.230054    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:30.233965    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:30.264421    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.264421    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:30.268191    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:30.302463    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.302463    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:30.302463    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:30.302463    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:30.369905    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:30.369905    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:30.407364    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:30.407364    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:30.501045    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:30.489137    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.491259    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.493208    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.494311    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.496063    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:30.489137    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.491259    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.493208    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.494311    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.496063    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
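Every "describe nodes" attempt fails identically: kubectl on the node dials localhost:8443 and gets connection refused, which together with the empty k8s_kube-apiserver listing above means nothing is listening at all, not a TLS or credential problem. A quick probe that separates "nothing listening" from "listening but unhealthy", assuming the same port as the kubeconfig in the log:

    # "refused" = no listener at all; any HTTP status code = a process answered.
    curl -sk -o /dev/null -w '%{http_code}\n' https://localhost:8443/healthz \
      || echo 'connection refused: apiserver not listening'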
	I1217 02:09:30.501045    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:30.501045    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:30.529058    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:30.529119    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:30.973740    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:09:31.053832    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:31.053832    6296 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:09:31.057712    6296 out.go:179] * Enabled addons: 
	I1217 02:09:31.060716    6296 addons.go:530] duration metric: took 1m41.3245326s for enable addons: enabled=[]
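The dashboard apply fails for the same underlying reason; the --validate=false hint in the stderr would only move the failure from the OpenAPI download to the apply itself, since both need the API server. If retrying by hand, gate the apply on the server answering first; a sketch using the binary and kubeconfig paths from the log (manifest list abbreviated to the first file):

    # Wait until the apiserver answers, then re-apply the addon manifests.
    until sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
        --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz >/dev/null 2>&1; do
      sleep 3
    done
    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
      -f /etc/kubernetes/addons/dashboard-ns.yaml  # ...remaining manifests as above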
	W1217 02:09:27.847902    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
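The out-of-order W1217 lines from pid 6768 (here and below) are interleaved output from the parallel no-preload test, which is polling its own node's Ready condition over forwarded port 63565 and getting EOF. A hedged equivalent of that poll via kubectl, assuming the profile name doubles as the kubectl context, as minikube normally arranges:

    # Read the Ready condition the node_ready check is waiting on.
    kubectl --context no-preload-184000 get node no-preload-184000 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'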
	I1217 02:09:33.093000    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:33.117479    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:33.148299    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.148299    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:33.152403    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:33.180747    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.180747    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:33.184258    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:33.214319    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.214389    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:33.217921    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:33.244463    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.244463    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:33.248882    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:33.280520    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.280573    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:33.284251    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:33.313836    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.313883    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:33.318949    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:33.351545    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.351545    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:33.355242    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:33.384638    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.384638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:33.384638    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:33.384638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:33.438624    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:33.438624    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:33.503148    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:33.504145    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:33.542770    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:33.542770    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:33.628872    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:33.616788    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.618355    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.619202    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.622311    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.623559    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:33.616788    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.618355    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.619202    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.622311    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.623559    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:33.628872    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:33.628872    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:36.163766    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:36.190660    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:36.219485    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.219485    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:36.223169    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:36.253826    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.253826    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:36.257584    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:36.289684    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.289684    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:36.293455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:36.321228    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.321228    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:36.326076    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:36.355893    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.355893    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:36.360432    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:36.392307    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.392359    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:36.395377    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:36.427797    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.427797    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:36.431432    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:36.465462    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.465547    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:36.465590    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:36.465605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:36.515585    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:36.515688    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:36.577828    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:36.577828    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:36.617923    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:36.617923    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:36.706865    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:36.696037    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.697154    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.698217    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.699314    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.700190    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:36.696037    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.697154    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.698217    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.699314    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.700190    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:36.706865    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:36.706865    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:39.240583    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:39.269426    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:39.300548    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.300548    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:39.304455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:39.337640    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.337640    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:39.341427    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:39.375280    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.375280    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:39.379328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:39.408206    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.408291    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:39.413138    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:39.439760    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.439760    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:39.443728    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:39.470865    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.471120    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:39.477630    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:39.510101    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.510101    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:39.515759    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:39.545423    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.545494    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:39.545494    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:39.545559    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:39.574474    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:39.574474    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:39.627410    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:39.627410    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:39.687852    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:39.687852    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:39.730823    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:39.730823    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:39.820771    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:39.809479    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.810890    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.811655    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.814487    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.816836    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:39.809479    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.810890    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.811655    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.814487    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.816836    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:42.326489    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:42.349989    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:42.381673    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.381673    6296 logs.go:284] No container was found matching "kube-apiserver"
	W1217 02:09:37.889672    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:42.385392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:42.414575    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.414575    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:42.418510    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:42.452120    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.452120    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:42.456157    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:42.484625    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.484625    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:42.487782    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:42.520235    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.520235    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:42.525546    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:42.558589    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.558589    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:42.561770    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:42.592364    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.592364    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:42.596368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:42.625522    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.625522    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:42.625522    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:42.625522    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:42.661616    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:42.661616    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:42.748046    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:42.737433    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.739312    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.740542    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.743197    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.744170    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:42.737433    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.739312    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.740542    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.743197    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.744170    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:42.748046    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:42.748046    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:42.778854    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:42.778854    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:42.827860    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:42.827860    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:45.394220    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:45.418501    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:45.453084    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.453132    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:45.457433    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:45.491679    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.491679    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:45.495517    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:45.524934    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.524934    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:45.528788    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:45.559787    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.559837    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:45.563714    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:45.608019    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.608104    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:45.612132    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:45.639869    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.639869    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:45.644002    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:45.671767    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.671767    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:45.675466    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:45.704056    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.704104    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:45.704104    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:45.704104    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:45.766557    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:45.766557    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:45.807449    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:45.807449    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:45.898686    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:45.887850    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.888794    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.889893    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.891161    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.894108    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:45.887850    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.888794    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.889893    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.891161    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.894108    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:45.898686    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:45.898686    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:45.924614    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:45.924614    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:48.482563    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:48.510137    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:48.546063    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.546063    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:48.551905    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:48.588536    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.588617    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:48.592628    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:48.621540    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.621540    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:48.625701    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:48.653505    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.653505    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:48.659485    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:48.688940    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.689008    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:48.692649    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:48.718858    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.718858    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:48.722907    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:48.752451    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.752451    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:48.755913    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:48.785865    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.785903    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:48.785903    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:48.785948    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:48.842730    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:48.843261    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:48.905352    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:48.905352    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:48.945271    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:48.945271    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:49.027913    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:49.016272    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.017718    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.022195    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.023419    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.024431    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:49.016272    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.017718    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.022195    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.023419    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.024431    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:49.027963    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:49.027963    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:51.563182    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:51.587223    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:51.619597    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.619621    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:51.623355    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:51.652069    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.652152    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:51.655716    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:51.684602    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.684653    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:51.687735    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:51.716327    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.716327    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:51.720054    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:51.750202    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.750266    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:51.753821    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:51.781863    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.781863    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:51.785648    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:51.814791    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.814841    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:51.818565    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:51.850654    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.850654    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:51.850654    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:51.850654    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:51.912429    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:51.912429    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:51.951795    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:51.951795    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:52.035486    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:52.024665    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.026342    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.028055    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.029764    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.030775    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:52.024665    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.026342    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.028055    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.029764    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.030775    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:52.035486    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:52.035486    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:52.063472    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:52.063472    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:09:47.930106    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:54.631678    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:54.657392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:54.689037    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.689037    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:54.692460    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:54.723231    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.723231    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:54.729158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:54.759168    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.759168    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:54.762883    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:54.792371    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.792371    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:54.796165    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:54.828375    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.828375    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:54.832201    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:54.862409    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.862476    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:54.866107    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:54.897161    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.897161    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:54.900834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:54.947452    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.947452    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:54.947452    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:54.947452    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:55.016411    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:55.016411    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:55.055628    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:55.055628    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:55.152557    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:55.141168    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.142077    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.145931    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.147597    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.148932    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:55.141168    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.142077    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.145931    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.147597    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.148932    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:55.152599    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:55.152599    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:55.180492    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:55.180492    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
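Each cycle above begins with the same set of component probes. As a reading aid, the following is a minimal bash sketch of that probe loop — not minikube's actual Go implementation in logs.go, just the shell equivalent of the docker invocations copied verbatim from the log. The "k8s_" name prefix is the cri-dockerd container-naming convention; on this node every probe returns an empty list, which the log records as "0 containers".

#!/bin/bash
# Probe each control-plane component the way the log shows: list any
# container (running or exited) whose name starts with k8s_<component>.
for component in kube-apiserver etcd coredns kube-scheduler \
                 kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
  ids=$(docker ps -a --filter=name=k8s_"${component}" --format='{{.ID}}')
  if [ -z "$ids" ]; then
    echo "No container was found matching \"${component}\""
  else
    echo "${component}: ${ids}"
  fi
done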
	I1217 02:09:57.741989    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:57.768328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:57.799200    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.799200    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:57.803065    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:57.832042    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.832042    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:57.835921    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:57.863829    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.863891    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:57.867347    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:57.896797    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.896822    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:57.900369    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:57.929832    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.929907    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:57.933326    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:57.960278    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.960278    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:57.964215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:57.992277    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.992324    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:57.995951    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:58.026155    6296 logs.go:282] 0 containers: []
	W1217 02:09:58.026254    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:58.026254    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:58.026303    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:58.091999    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:58.091999    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:58.131520    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:58.131520    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:58.226831    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:58.216784    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.218266    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.219997    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.221198    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.222992    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:58.216784    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.218266    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.219997    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.221198    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.222992    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:58.226831    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:58.226831    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:58.256592    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:58.256635    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:00.809919    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:00.842222    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:00.872955    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.872955    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:00.876666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:00.906031    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.906031    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:00.909593    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:00.939873    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.939946    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:00.943346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:00.972609    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.972643    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:00.975886    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:01.005269    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.005269    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:01.009766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:01.041677    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.041677    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:01.048361    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:01.081235    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.081312    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:01.084849    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:01.113437    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.113437    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:01.113437    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:01.113437    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:01.160067    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:01.160624    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:01.225071    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:01.225071    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:01.265307    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:01.265307    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:01.348506    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:01.336920    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.338210    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.339738    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.341232    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.342188    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:01.336920    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.338210    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.339738    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.341232    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.342188    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:01.348535    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:01.348571    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:09:57.967423    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:03.891628    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:03.925404    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:03.965688    6296 logs.go:282] 0 containers: []
	W1217 02:10:03.965688    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:03.968982    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:04.006348    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.006348    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:04.009769    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:04.039968    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.039968    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:04.044404    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:04.078472    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.078472    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:04.081894    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:04.113348    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.113348    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:04.117138    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:04.148885    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.148885    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:04.152756    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:04.181559    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.181616    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:04.185351    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:04.217017    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.217017    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:04.217017    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:04.217017    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:04.284540    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:04.284540    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:04.324402    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:04.324402    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:04.409943    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:04.395416    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.396326    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.402206    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.403321    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.404006    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:04.395416    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.396326    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.402206    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.403321    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.404006    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:04.409943    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:04.409943    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:04.438771    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:04.438771    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:06.997897    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:07.024185    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:07.054915    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.055512    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:07.060167    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:07.089778    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.089778    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:07.093773    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:07.124641    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.124641    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:07.128016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:07.154834    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.154915    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:07.158505    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:07.188568    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.188568    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:07.192962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:07.225078    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.225078    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:07.228699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:07.258599    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.258659    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:07.262590    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:07.291623    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.291623    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:07.291623    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:07.291623    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:07.322611    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:07.322611    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:07.374970    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:07.374970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:07.438795    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:07.438795    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:07.479442    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:07.479442    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:07.566162    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:07.555486    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.557015    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.558199    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559195    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559622    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:07.555486    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.557015    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.558199    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559195    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559622    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
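The "failed describe nodes" blocks repeated throughout this section all come from one probe. Below is a sketch of that probe in isolation, with the kubectl path and kubeconfig copied from the log lines above; while no apiserver is listening on localhost:8443, kubectl exits 1 with "connection refused", which logs.go records as the stderr shown.

#!/bin/bash
# Reproduce the "describe nodes" gathering step from the log. Against a
# healthy cluster this prints the node description; here it fails because
# the connection to localhost:8443 is refused.
sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
  --kubeconfig=/var/lib/minikube/kubeconfig
echo "kubectl exit status: $?"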
	I1217 02:10:10.072312    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:10.096505    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:10.125617    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.125617    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:10.129377    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:10.157921    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.157921    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:10.161850    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:10.191705    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.191705    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:10.196003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:10.224412    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.224482    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:10.229368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:10.258140    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.258140    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:10.261205    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:10.292047    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.292047    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:10.296511    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:10.325818    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.325818    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:10.329752    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:10.359454    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.359530    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:10.359530    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:10.359530    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:10.413970    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:10.413970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:10.476665    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:10.476665    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:10.516335    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:10.516335    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:10.602353    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:10.592838    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.594139    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.595393    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.596552    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.597619    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:10.592838    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.594139    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.595393    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.596552    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.597619    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:10.602353    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:10.602353    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:10:08.007712    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:13.134148    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:13.159720    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:13.191534    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.191534    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:13.195626    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:13.230035    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.230035    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:13.233817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:13.266476    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.266476    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:13.270598    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:13.305852    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.305852    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:13.310349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:13.341805    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.341867    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:13.345346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:13.377945    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.377945    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:13.381659    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:13.411885    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.411957    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:13.416039    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:13.446642    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.446642    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:13.446642    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:13.446642    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:13.487083    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:13.487083    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:13.574632    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:13.564930    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.565686    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.568158    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.569159    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.570310    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:13.564930    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.565686    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.568158    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.569159    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.570310    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:13.574632    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:13.574632    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:13.604181    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:13.604702    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:13.660020    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:13.660020    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:16.225038    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:16.248922    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:16.280247    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.280247    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:16.284285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:16.312596    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.312596    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:16.316952    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:16.345108    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.345108    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:16.348083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:16.377403    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.377403    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:16.380619    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:16.410555    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.410555    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:16.414048    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:16.446454    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.446454    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:16.449405    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:16.478967    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.478967    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:16.484108    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:16.516422    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.516422    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:16.516422    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:16.516422    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:16.580305    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:16.580305    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:16.618663    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:16.618663    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:16.705105    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:16.694074    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.695040    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.696842    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.698676    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.700646    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:16.694074    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.695040    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.696842    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.698676    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.700646    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:16.705105    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:16.705105    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:16.732046    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:16.732046    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:19.284431    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:19.307909    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:19.340842    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.340842    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:19.344830    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:19.371150    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.371150    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:19.374863    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:19.403216    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.403216    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:19.406907    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:19.433979    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.433979    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:19.438046    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:19.469636    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.469636    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:19.473675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:19.504296    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.504296    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:19.508671    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:19.535932    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.535932    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:19.539707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:19.567355    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.567416    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:19.567416    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:19.567416    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:19.629876    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:19.629876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:19.678547    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:19.678547    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:19.785306    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:19.776195    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.777270    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.778111    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.779442    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.780820    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:19.776195    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.777270    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.778111    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.779442    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.780820    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:19.785306    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:19.785371    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:19.813137    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:19.813137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:22.369643    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:10:18.049946    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:22.396731    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:22.431018    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.431018    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:22.434688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:22.463307    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.463307    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:22.467323    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:22.497065    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.497065    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:22.500574    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:22.531497    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.531564    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:22.535088    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:22.563706    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.563779    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:22.567344    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:22.602516    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.602597    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:22.606242    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:22.637637    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.637699    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:22.641314    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:22.668078    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.668078    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:22.668078    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:22.668078    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:22.754963    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:22.744973    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.745956    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.748143    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.749016    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.751155    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:22.744973    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.745956    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.748143    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.749016    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.751155    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
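
Every "describe nodes" attempt in this loop dies the same way: the in-node kubeconfig points kubectl at https://localhost:8443, no kube-apiserver is listening, so the TCP connect is refused before API discovery even begins. The five memcache.go lines are client-go retrying group discovery, and minikube logs the stderr twice, once in the command error and once between the ** stderr ** markers. A hedged way to reproduce the failure mode using nothing but the port from the log:

    // dialcheck.go: reproduces the failure mode above. Nothing listens on the
    // apiserver port, so every request dies at TCP connect. Port 8443 comes
    // from the log; the rest is an illustrative check, not minikube code.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // Yields "connect: connection refused" when no listener exists.
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on 8443")
    }
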
	I1217 02:10:22.754963    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:22.754963    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:22.783172    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:22.783222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:22.840048    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:22.840048    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:22.900137    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:22.900137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
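
Each round gathers the same four log sources: the kubelet and docker/cri-docker units from journald (last 400 lines each), container status, and kernel messages at warning level and above via dmesg (its -P flag disables the pager, -H requests human-readable output, -L=never turns colors off). A rough local equivalent, assuming a Linux host with journalctl and dmesg available; minikube itself runs these over SSH inside the node:

    // gather.go: a rough local equivalent of the log-gathering pass, assuming
    // a Linux host with journalctl and dmesg; minikube runs these over SSH
    // inside the node, and the command strings are taken from the log.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        cmds := []struct{ name, cmd string }{
            {"kubelet", "journalctl -u kubelet -n 400"},
            {"Docker", "journalctl -u docker -u cri-docker -n 400"},
            {"dmesg", "dmesg --level warn,err,crit,alert,emerg | tail -n 400"},
        }
        for _, c := range cmds {
            out, err := exec.Command("/bin/bash", "-c", c.cmd).CombinedOutput()
            fmt.Printf("== %s: %d bytes (err=%v) ==\n", c.name, len(out), err)
        }
    }
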
	I1217 02:10:25.445900    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:25.472646    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:25.502929    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.502929    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:25.506274    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:25.537721    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.537721    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:25.543044    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:25.572924    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.572924    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:25.576391    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:25.607737    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.607798    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:25.611457    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:25.644967    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.645041    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:25.648690    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:25.677801    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.677801    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:25.681530    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:25.709148    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.709148    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:25.715667    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:25.746892    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.746892    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:25.746892    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:25.746892    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
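
The container-status command above is a two-stage fallback: `which crictl || echo crictl` resolves crictl's full path when it is installed (and otherwise leaves the bare name, which fails cleanly), and the outer `|| sudo docker ps -a` catches either failure, so some container listing is always captured whichever runtime CLI the node has. The same fallback as a sketch; the helper name is illustrative:

    // fallback.go: the "crictl, else docker" chain from the container-status
    // command above, as a sketch; containerStatus is an illustrative name.
    package main

    import (
        "fmt"
        "os/exec"
    )

    // containerStatus tries crictl first and falls back to docker, mirroring
    // the shell chain `crictl ps -a || docker ps -a`.
    func containerStatus() ([]byte, error) {
        if out, err := exec.Command("crictl", "ps", "-a").Output(); err == nil {
            return out, nil
        }
        return exec.Command("docker", "ps", "-a").Output()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("no runtime CLI answered:", err)
            return
        }
        fmt.Print(string(out))
    }
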
	I1217 02:10:25.796336    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:25.796336    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:25.862353    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:25.862353    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:25.902100    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:25.902100    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:25.988926    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:25.979946    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.980923    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.983755    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.985453    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.986609    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:25.979946    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.980923    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.983755    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.985453    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.986609    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:25.988926    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:25.988926    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:28.523475    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:28.549366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:28.580055    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.580055    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:28.583822    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:28.615168    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.615168    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:28.618724    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:28.650344    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.650368    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:28.654014    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:28.704033    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.704033    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:28.707699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:28.738871    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.738938    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:28.743270    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:28.775432    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.775432    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:28.779176    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:28.810234    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.810351    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:28.814357    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:28.845783    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.845783    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:28.845783    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:28.845783    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:28.902626    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:28.902626    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:28.963758    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:28.963758    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:29.002141    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:29.002141    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:29.104674    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:29.094415    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.095636    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.096872    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.097927    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.099112    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:29.094415    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.095636    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.096872    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.097927    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.099112    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:29.104674    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:29.104674    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:31.640270    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
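
That pgrep call is the actual readiness gate for each cycle: -f matches against the full command line, -x requires the whole line to match the pattern, and -n keeps only the newest PID, so it succeeds only once a real kube-apiserver process for this profile exists. The timestamps show the cadence, roughly one probe every three seconds. A standalone sketch of the same wait loop; the interval and deadline here are illustrative, not minikube's tuned values:

    // wait.go: the retry loop visible in the ~3-second spacing of the pgrep
    // entries, as a standalone sketch; interval and deadline are illustrative.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func main() {
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            // -f matches the full command line, -x requires an exact match,
            // -n reports only the newest matching PID.
            err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
            if err == nil {
                fmt.Println("kube-apiserver process found")
                return
            }
            time.Sleep(3 * time.Second)
        }
        fmt.Println("timed out waiting for kube-apiserver")
    }
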
	I1217 02:10:31.668862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:31.703099    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.703099    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:31.706355    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:31.737408    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.737408    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:31.741549    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:31.771462    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.771549    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:31.775645    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:31.803600    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.803600    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:31.807313    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:31.835884    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.835884    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:31.840000    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:31.870518    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.870518    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:31.877548    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:31.905387    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.905387    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:31.909722    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:31.938258    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.938284    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:31.938284    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:31.938284    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:32.000115    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:32.000115    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:32.039351    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:32.039351    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:32.128849    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:32.117556    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.118519    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.121192    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.122137    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.123350    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:32.117556    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.118519    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.121192    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.122137    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.123350    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:32.128849    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:32.128849    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:32.155670    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:32.155670    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:10:28.083644    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
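
The 6768 line above appears to be interleaved from the parallel no-preload test: a second minikube process polling its own apiserver on 127.0.0.1:63565 for the node's Ready condition and getting EOF while that server is down. The equivalent check written directly against client-go, as a hedged sketch (the kubeconfig path is a placeholder; the node name is taken from the log):

    // nodeready.go: the Ready-condition poll behind the interleaved 6768
    // lines, written against client-go as a hedged sketch. The kubeconfig
    // path is a placeholder; the node name is taken from the log.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.TODO(), "no-preload-184000", metav1.GetOptions{})
        if err != nil {
            // An EOF here, as in the log, means the connection dropped mid-request.
            fmt.Println("will retry:", err)
            return
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                fmt.Println("Ready:", cond.Status)
            }
        }
    }
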
	I1217 02:10:34.707099    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:34.732689    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:34.763625    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.763625    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:34.767349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:34.797435    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.797435    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:34.801415    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:34.828785    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.828785    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:34.832654    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:34.864748    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.864748    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:34.868392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:34.896365    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.896365    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:34.900474    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:34.932681    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.932681    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:34.936571    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:34.966056    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.966056    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:34.969208    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:34.998362    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.998362    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:34.998362    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:34.998362    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:35.036977    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:35.036977    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:35.134841    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:35.123096    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.125161    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.126319    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.127728    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.129900    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:35.123096    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.125161    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.126319    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.127728    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.129900    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:35.134841    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:35.134841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:35.162429    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:35.162429    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:35.213960    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:35.214015    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:37.779857    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:37.806799    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:37.840730    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.840730    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:37.846443    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:37.875504    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.875504    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:37.879215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:37.910068    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.910068    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:37.913551    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:37.942897    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.942897    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:37.946741    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:37.978321    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.978321    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:37.982267    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:38.008421    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.008421    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:38.013043    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:38.043041    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.043041    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:38.049737    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:38.082117    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.082117    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:38.082117    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:38.082117    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:38.148970    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:38.148970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:38.189697    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:38.189697    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:38.276122    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:38.265842    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.267106    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.268317    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.270927    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.272044    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:38.265842    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.267106    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.268317    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.270927    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.272044    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:38.276122    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:38.276122    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:38.304355    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:38.304355    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:40.862712    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:40.889041    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:40.921169    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.921169    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:40.924297    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:40.956313    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.956356    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:40.960294    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:40.990144    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.990144    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:40.993876    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:41.026732    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.026803    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:41.030745    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:41.073825    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.073825    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:41.078152    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:41.105859    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.105859    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:41.111714    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:41.143286    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.143324    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:41.146776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:41.176314    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.176345    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:41.176345    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:41.176345    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:41.213266    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:41.213266    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:41.300305    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:41.290426    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.291562    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.292511    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.293690    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.294979    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:41.290426    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.291562    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.292511    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.293690    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.294979    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:41.300305    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:41.300305    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:41.328560    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:41.328621    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:41.375953    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:41.375953    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:10:38.119927    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:43.941613    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:43.967455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:44.000199    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.000199    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:44.003568    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:44.035058    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.035058    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:44.040590    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:44.083687    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.083687    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:44.087476    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:44.115776    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.115776    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:44.119318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:44.155471    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.155513    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:44.159433    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:44.191599    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.191636    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:44.195145    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:44.228181    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.228211    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:44.231971    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:44.259687    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.259763    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:44.259763    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:44.259763    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:44.323705    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:44.323705    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:44.365401    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:44.365401    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:44.453893    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:44.444848    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.446165    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.447569    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.449198    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.450326    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:44.444848    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.446165    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.447569    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.449198    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.450326    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:44.453893    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:44.453893    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:44.480694    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:44.480694    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:47.042501    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:47.067663    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:47.108433    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.108433    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:47.112206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:47.144336    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.144336    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:47.148449    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:47.182968    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.183049    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:47.186614    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:47.215738    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.215738    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:47.219595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:47.248444    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.248511    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:47.252434    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:47.280975    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.280975    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:47.284966    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:47.317178    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.317178    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:47.321223    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:47.352638    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.352638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:47.352638    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:47.352638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:47.390049    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:47.390049    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:47.479425    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:47.469913    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.471092    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.472262    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.473545    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.474680    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:47.469913    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.471092    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.472262    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.473545    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.474680    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:47.479425    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:47.479425    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:47.505331    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:47.505331    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:47.556431    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:47.556431    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:50.124255    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:50.151100    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:50.184499    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.184565    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:50.187696    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:50.221764    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.221764    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:50.225471    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:50.253823    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.253823    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:50.260470    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:50.289768    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.289815    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:50.295283    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:50.321597    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.321597    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:50.325774    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:50.356707    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.356707    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:50.360685    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:50.390099    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.390099    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:50.393971    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:50.420950    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.420950    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:50.420950    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:50.420950    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:50.484730    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:50.484730    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:50.523997    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:50.523997    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:50.618256    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:50.607046    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.608047    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.610609    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.611743    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.612938    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:50.607046    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.608047    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.610609    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.611743    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.612938    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:50.618256    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:50.618256    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:50.645077    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:50.645077    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:10:48.158175    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:53.200622    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:53.223348    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:53.253589    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.253589    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:53.258688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:53.287647    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.287689    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:53.291555    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:53.324358    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.324403    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:53.327650    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:53.355417    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.355417    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:53.359780    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:53.390012    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.390012    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:53.393536    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:53.420636    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.420672    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:53.424429    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:53.453665    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.453744    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:53.456764    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:53.486769    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.486836    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
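Each cycle of this loop repeats the same check: docker ps -a filtered by the k8s_ name prefix that cri-dockerd gives Kubernetes-managed containers. Zero matches for every control-plane component means kubelet never created a single pod, not that pods started and crashed. The whole probe can be reproduced in one pass inside the node; a sketch:

    # list any control-plane containers by cri-dockerd's k8s_ name prefix
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      echo "== ${c} =="
      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}} {{.Names}} {{.Status}}'
    done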
	I1217 02:10:53.486875    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:53.486875    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:53.552513    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:53.552513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:53.593054    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:53.593054    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:53.683171    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:53.673168    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.674217    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.677093    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.678848    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.679784    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:53.673168    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.674217    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.677093    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.678848    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.679784    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:53.683207    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:53.683230    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:53.712513    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:53.712513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:56.288600    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:56.314380    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:56.347447    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.347447    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:56.351158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:56.381779    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.381779    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:56.385232    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:56.423000    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.423000    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:56.427083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:56.456635    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.456635    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:56.460509    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:56.490868    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.490868    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:56.496594    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:56.523671    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.523671    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:56.527847    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:56.559992    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.559992    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:56.565352    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:56.591708    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.591708    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:56.591708    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:56.591708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:56.656572    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:56.656572    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:56.696334    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:56.696334    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:56.788411    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:56.777962   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.779251   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.780163   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.782593   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.783670   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:56.777962   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.779251   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.780163   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.782593   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.783670   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:56.788411    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:56.788411    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:56.815762    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:56.815762    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:59.370676    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:59.404615    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:59.440735    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.440735    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:59.446758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:59.475209    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.475209    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:59.479521    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:59.509465    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.509465    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:59.513228    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:59.542409    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.542409    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:59.546008    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:59.575778    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.575778    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:59.579759    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:59.613465    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.613465    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:59.617266    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:59.645245    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.645245    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:59.649170    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:59.680413    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.680449    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:59.680449    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:59.680449    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:59.713987    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:59.713987    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:59.764930    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:59.764994    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:59.832077    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:59.832077    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:59.870681    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:59.870681    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:59.953336    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:59.942085   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.942906   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.945651   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.947051   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.948218   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:59.942085   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.942906   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.945651   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.947051   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.948218   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1217 02:10:58.200115    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:11:02.457745    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:02.492666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:02.526665    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.526665    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:02.530862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:02.560353    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.560413    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:02.564099    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:02.595430    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.595430    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:02.599884    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:02.629744    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.629744    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:02.633637    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:02.662623    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.662623    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:02.666817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:02.694696    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.694696    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:02.698194    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:02.727384    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.727442    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:02.731483    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:02.766114    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.766114    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:02.766114    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:02.766114    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:02.830755    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:02.830755    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:02.870216    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:02.870216    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:02.958327    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:02.947356   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.948306   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.949403   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.950298   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.952486   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:02.947356   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.948306   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.949403   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.950298   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.952486   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:02.958327    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:02.958380    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:02.984980    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:02.984980    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:05.540158    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:05.564812    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:05.595638    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.595638    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:05.599748    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:05.628748    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.628748    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:05.632878    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:05.666232    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.666257    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:05.670293    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:05.699654    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.699654    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:05.703004    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:05.733113    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.733113    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:05.737096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:05.765591    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.765639    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:05.770398    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:05.796360    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.796360    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:05.800240    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:05.829847    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.829914    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:05.829914    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:05.829945    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:05.880789    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:05.880789    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:05.943002    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:05.943002    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:05.983389    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:05.983389    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:06.076023    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:06.063780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.064562   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.067564   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.069726   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.070666   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:06.063780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.064562   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.067564   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.069726   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.070666   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:06.076023    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:06.076023    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:08.608606    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:08.632215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:08.665017    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.665017    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:08.669299    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:08.695355    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.695355    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:08.699306    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:08.729054    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.729054    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:08.732454    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:08.759881    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.759881    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:08.764328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:08.793695    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.793777    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:08.797908    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:08.826225    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.826225    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:08.829679    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:08.859645    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.859645    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:08.863083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:08.893657    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.893657    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:08.893657    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:08.893657    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:08.958163    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:08.958163    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:08.997418    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:08.997418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:09.087973    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:09.074815   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.076834   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.078823   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.080747   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.081590   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:09.074815   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.076834   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.078823   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.080747   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.081590   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:09.087973    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:09.087973    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:09.115687    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:09.115687    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:11.697770    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:11.725676    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:11.758809    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.758809    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:11.762929    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:11.794198    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.794198    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:11.798023    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:11.828890    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.828890    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:11.833358    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:11.865217    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.865217    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:11.868915    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:11.897672    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.897672    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:11.901235    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:11.931725    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.931808    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:11.935264    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:11.966263    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.966263    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:11.970422    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:11.999856    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.999856    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:11.999856    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:11.999856    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:12.064137    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:12.064137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:12.102491    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:12.102491    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:12.183568    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:12.174095   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.175081   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.176122   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.177427   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.178548   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:12.174095   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.175081   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.176122   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.177427   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.178548   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:12.183568    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:12.183568    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:12.212178    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:12.212178    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:11:08.241744    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:11:16.871278    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1217 02:11:16.871278    6768 node_ready.go:38] duration metric: took 6m0.0008728s for node "no-preload-184000" to be "Ready" ...
	I1217 02:11:16.874572    6768 out.go:203] 
	W1217 02:11:16.876457    6768 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 02:11:16.876457    6768 out.go:285] * 
	W1217 02:11:16.879042    6768 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:11:16.881673    6768 out.go:203] 
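The GUEST_START exit above is the 6m node-readiness timeout: minikube polled the node's Ready condition until the context deadline expired. The condition it was waiting on can be read directly with kubectl; a sketch, assuming a working kubeconfig for the profile (here the call would fail the same way, since the apiserver never came up):

    kubectl get node no-preload-184000 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'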
	I1217 02:11:14.772821    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:14.797656    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:14.826900    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.826900    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:14.829894    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:14.859202    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.859202    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:14.862783    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:14.891414    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.891414    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:14.895052    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:14.925404    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.925404    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:14.928966    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:14.959295    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.959330    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:14.962893    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:14.991696    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.991730    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:14.994776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:15.025468    6296 logs.go:282] 0 containers: []
	W1217 02:11:15.025468    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:15.031674    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:15.060661    6296 logs.go:282] 0 containers: []
	W1217 02:11:15.060661    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:15.060733    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:15.060733    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:15.120513    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:15.120513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:15.159608    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:15.159608    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:15.244418    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:15.235611   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.236439   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.238662   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.239643   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.240776   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:15.235611   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.236439   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.238662   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.239643   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.240776   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:15.244418    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:15.244418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:15.271288    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:15.271288    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	
	
	==> Docker <==
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325544488Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325628897Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325641498Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325647799Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325653800Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325676802Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325716506Z" level=info msg="Initializing buildkit"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.423454913Z" level=info msg="Completed buildkit initialization"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.434194190Z" level=info msg="Daemon has completed initialization"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.434389711Z" level=info msg="API listen on [::]:2376"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.434491222Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 02:05:13 no-preload-184000 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.434476421Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 02:05:14 no-preload-184000 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Loaded network plugin cni"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 02:05:14 no-preload-184000 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
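Note that the container runtime itself is healthy: dockerd completed initialization and cri-dockerd started its gRPC backend, so the failure is not on the runtime side. The one relevant signal in this section is the 02:05:13 warning that cgroup v1 support is deprecated, which foreshadows the kubelet errors below. Those lines can be pulled back out of the journal on their own; a sketch:

    # keep only the cgroup-related warnings from the runtime units
    sudo journalctl -u docker -u cri-docker --no-pager | grep -i cgroup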
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:19.093689    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:19.094935    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:19.096676    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:19.098207    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:19.099318    8250 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +5.752411] CPU: 12 PID: 469779 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f8b9b910b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f8b9b910af6.
	[  +0.000001] RSP: 002b:00007fffc85e9670 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.875329] CPU: 10 PID: 469916 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7fdfac8dab20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fdfac8daaf6.
	[  +0.000001] RSP: 002b:00007ffd587a0060 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 02:11:19 up  2:30,  0 user,  load average: 0.58, 0.95, 2.29
	Linux no-preload-184000 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:11:16 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:11:16 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 17 02:11:16 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:16 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:16 no-preload-184000 kubelet[8081]: E1217 02:11:16.807393    8081 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:11:16 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:11:16 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:11:17 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 17 02:11:17 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:17 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:17 no-preload-184000 kubelet[8092]: E1217 02:11:17.580357    8092 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:11:17 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:11:17 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:11:18 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 17 02:11:18 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:18 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:18 no-preload-184000 kubelet[8120]: E1217 02:11:18.343937    8120 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:11:18 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:11:18 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:11:18 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 485.
	Dec 17 02:11:18 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:18 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:11:19 no-preload-184000 kubelet[8239]: E1217 02:11:19.080656    8239 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:11:19 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:11:19 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000: exit status 2 (574.9673ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-184000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/SecondStart (378.19s)
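
The kubelet journal above points at the likely root cause of this SecondStart failure: kubelet v1.35.0-beta.0 refuses to run on a cgroup v1 host, so the apiserver never comes up and every kubectl call to localhost:8443 is refused. A quick way to confirm which cgroup version the node actually sees (a diagnostic sketch, not part of the recorded run; the profile name is taken from the logs above):

	minikube -p no-preload-184000 ssh -- stat -fc %T /sys/fs/cgroup/
	# prints "cgroup2fs" on a cgroup v2 (unified) host, "tmpfs" on a cgroup v1 host

On WSL2 hosts, cgroup v2 can usually be enabled by adding kernelCommandLine = cgroup_no_v1=all under [wsl2] in .wslconfig and restarting WSL, though whether that applies to this Jenkins agent is an assumption.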

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (121.36s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-383500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1217 02:05:33.762819    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:05:38.676326    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:05:40.740129    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-044000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:06:00.962143    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-278200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:06:05.624422    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:06:28.670329    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-278200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:06:45.478157    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:06:52.414448    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-383500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m58.5552493s)

-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/metrics-apiservice.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-deployment.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-rbac.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/metrics-server-service.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_addons_e23971240287a88151a2b5edd52daaba3879ba4a_13.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-windows-amd64.exe addons enable metrics-server -p newest-cni-383500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
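
All four manifests above fail for the same reason: kubectl inside the node cannot reach the apiserver on localhost:8443, so the addon failure is a symptom of the control plane being down rather than a problem with the metrics-server manifests themselves. A minimal follow-up probe (hypothetical, reusing the profile name from this test):

	minikube -p newest-cni-383500 ssh -- curl -sk https://localhost:8443/healthz
	# "connection refused" here confirms the apiserver itself is not listening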
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-383500
helpers_test.go:244: (dbg) docker inspect newest-cni-383500:

-- stdout --
	[
	    {
	        "Id": "58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638",
	        "Created": "2025-12-17T01:57:11.100405677Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 433106,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T01:57:11.454843914Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/hostname",
	        "HostsPath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/hosts",
	        "LogPath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638-json.log",
	        "Name": "/newest-cni-383500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-383500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-383500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-383500",
	                "Source": "/var/lib/docker/volumes/newest-cni-383500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-383500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-383500",
	                "name.minikube.sigs.k8s.io": "newest-cni-383500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "6caee67017413de1f9dc483ad9459600dcb6111052c799eaefbc16f4be8d0125",
	            "SandboxKey": "/var/run/docker/netns/6caee6701741",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63415"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63416"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63417"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63418"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63419"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-383500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a0a3f566cb0e1e68eaf85fc99a3ee131940651a4c9a15e291bc077be33f07b4e",
	                    "EndpointID": "2d14072f1129746f62b2ed0cbaec8f7f3430521dededc919044dc0c745590f04",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-383500",
	                        "58edac260513"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
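
The NetworkSettings.Ports map in the inspect output above is how minikube resolves the node's SSH endpoint; the Last Start log later in this report runs the equivalent lookup as a Go template against the 22/tcp binding. The same query can be reproduced by hand (values match the inspect output above):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-383500
	# with the port bindings shown above, this prints 63415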
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-383500 -n newest-cni-383500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-383500 -n newest-cni-383500: exit status 6 (594.7053ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 02:07:32.675587    9972 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-383500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:248: status error: exit status 6 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-383500 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-383500 logs -n 25: (1.1712185s)
E1217 02:07:33.942961    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:261: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p default-k8s-diff-port-278200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                         │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ stop    │ -p default-k8s-diff-port-278200 --alsologtostderr -v=3                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-278200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                    │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p default-k8s-diff-port-278200 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ old-k8s-version-044000 image list --format=json                                                                                                                                                                            │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ pause   │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ unpause │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │                     │
	│ image   │ embed-certs-653800 image list --format=json                                                                                                                                                                                │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ default-k8s-diff-port-278200 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-184000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	│ stop    │ -p no-preload-184000 --alsologtostderr -v=3                                                                                                                                                                                │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ addons  │ enable dashboard -p no-preload-184000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ start   │ -p no-preload-184000 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-383500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:05:02
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 02:05:02.629645    6768 out.go:360] Setting OutFile to fd 852 ...
	I1217 02:05:02.671051    6768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:05:02.671051    6768 out.go:374] Setting ErrFile to fd 1172...
	I1217 02:05:02.671051    6768 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:05:02.687471    6768 out.go:368] Setting JSON to false
	I1217 02:05:02.690746    6768 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8691,"bootTime":1765928411,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 02:05:02.690781    6768 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 02:05:02.694017    6768 out.go:179] * [no-preload-184000] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 02:05:02.699245    6768 notify.go:221] Checking for updates...
	I1217 02:05:02.701769    6768 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:05:02.703938    6768 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:05:02.706929    6768 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 02:05:02.709501    6768 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:05:02.712185    6768 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 02:05:02.715207    6768 config.go:182] Loaded profile config "no-preload-184000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:02.716501    6768 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:05:02.837461    6768 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 02:05:02.842258    6768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:05:03.079348    6768 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:05:03.054281062 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:05:03.087094    6768 out.go:179] * Using the docker driver based on existing profile
	I1217 02:05:03.091220    6768 start.go:309] selected driver: docker
	I1217 02:05:03.091220    6768 start.go:927] validating driver "docker" against &{Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:03.091220    6768 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:05:03.188409    6768 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:05:03.434313    6768 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:05:03.415494177 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:05:03.434313    6768 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1217 02:05:03.434313    6768 cni.go:84] Creating CNI manager for ""
	I1217 02:05:03.434313    6768 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:05:03.434313    6768 start.go:353] cluster config:
	{Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:03.439310    6768 out.go:179] * Starting "no-preload-184000" primary control-plane node in "no-preload-184000" cluster
	I1217 02:05:03.441310    6768 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 02:05:03.443310    6768 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:05:03.448311    6768 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:05:03.448311    6768 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:05:03.448311    6768 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\config.json ...
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1
	I1217 02:05:03.448311    6768 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1
	I1217 02:05:03.545905    6768 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:05:03.545905    6768 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:05:03.545905    6768 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:05:03.545905    6768 start.go:360] acquireMachinesLock for no-preload-184000: {Name:mk58fd592c3ebf84a2801325b861ffe90e12015f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:03.545905    6768 start.go:364] duration metric: took 0s to acquireMachinesLock for "no-preload-184000"
	I1217 02:05:03.546921    6768 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:05:03.546921    6768 fix.go:54] fixHost starting: 
	I1217 02:05:03.557903    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:03.760117    6768 fix.go:112] recreateIfNeeded on no-preload-184000: state=Stopped err=<nil>
	W1217 02:05:03.760117    6768 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 02:05:03.764113    6768 out.go:252] * Restarting existing docker container for "no-preload-184000" ...
	I1217 02:05:03.767110    6768 cli_runner.go:164] Run: docker start no-preload-184000
	I1217 02:05:05.253549    6768 cli_runner.go:217] Completed: docker start no-preload-184000: (1.4864164s)
	I1217 02:05:05.260543    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:05.357919    6768 kic.go:430] container "no-preload-184000" state is running.
	I1217 02:05:05.364922    6768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-184000
	I1217 02:05:05.444478    6768 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\config.json ...
	I1217 02:05:05.447474    6768 machine.go:94] provisionDockerMachine start ...
	I1217 02:05:05.453480    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:05.545241    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:05.545241    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:05.545241    6768 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:05:05.549583    6768 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 02:05:06.370661    6768 cache.go:107] acquiring lock: {Name:mk30c175c099bb24f3495934fe82d3318ba32edc Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.370661    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 exists
	I1217 02:05:06.371228    6768 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.13.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\coredns\\coredns_v1.13.1" took 2.9228733s
	I1217 02:05:06.371228    6768 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.13.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\coredns\coredns_v1.13.1 succeeded
	I1217 02:05:06.375872    6768 cache.go:107] acquiring lock: {Name:mke46a29e5c99e04c7a644622126cc43b1380a20 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.375872    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 exists
	I1217 02:05:06.376401    6768 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-apiserver_v1.35.0-beta.0" took 2.9275166s
	I1217 02:05:06.376463    6768 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-apiserver_v1.35.0-beta.0 succeeded
	I1217 02:05:06.376989    6768 cache.go:107] acquiring lock: {Name:mk352f5bf629a9838a6dbf3b2a16ff0c4dd2ff59 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.377073    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 exists
	I1217 02:05:06.377073    6768 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\gcr.io\\k8s-minikube\\storage-provisioner_v5" took 2.9287184s
	I1217 02:05:06.377073    6768 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\gcr.io\k8s-minikube\storage-provisioner_v5 succeeded
	I1217 02:05:06.397758    6768 cache.go:107] acquiring lock: {Name:mk68f5204ebd9e2dce8f758b2902807726f293ec Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.397758    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 exists
	I1217 02:05:06.397758    6768 cache.go:96] cache image "registry.k8s.io/etcd:3.6.5-0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\etcd_3.6.5-0" took 2.9494026s
	I1217 02:05:06.397758    6768 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.5-0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\etcd_3.6.5-0 succeeded
	I1217 02:05:06.401745    6768 cache.go:107] acquiring lock: {Name:mk54af8aa524bd74f58a38f00f25557a0a8b1257 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.401745    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 exists
	I1217 02:05:06.401745    6768 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-scheduler_v1.35.0-beta.0" took 2.9533893s
	I1217 02:05:06.401745    6768 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-scheduler_v1.35.0-beta.0 succeeded
	I1217 02:05:06.434118    6768 cache.go:107] acquiring lock: {Name:mkc9166e5abcdc7c5aabe1d15411e835cbf56dcd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.434118    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 exists
	I1217 02:05:06.434118    6768 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-proxy_v1.35.0-beta.0" took 2.9857618s
	I1217 02:05:06.436060    6768 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-proxy_v1.35.0-beta.0 succeeded
	I1217 02:05:06.469702    6768 cache.go:107] acquiring lock: {Name:mkb5ac027c23fea34e68c48194a83612fb356ae6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.470703    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 exists
	I1217 02:05:06.470703    6768 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\pause_3.10.1" took 3.022347s
	I1217 02:05:06.470703    6768 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\pause_3.10.1 succeeded
	I1217 02:05:06.521227    6768 cache.go:107] acquiring lock: {Name:mkc9c075124416290ee42b83d8bf6270650b8e31 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:05:06.521321    6768 cache.go:115] \\?\Volume{0feb5ec2-51cb-400f-b6fe-f54ae77fbfba}\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 exists
	I1217 02:05:06.521321    6768 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.35.0-beta.0" -> "C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\cache\\images\\amd64\\registry.k8s.io\\kube-controller-manager_v1.35.0-beta.0" took 3.0729641s
	I1217 02:05:06.521321    6768 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.35.0-beta.0 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\images\amd64\registry.k8s.io\kube-controller-manager_v1.35.0-beta.0 succeeded
	I1217 02:05:06.521321    6768 cache.go:87] Successfully saved all images to host disk.
	I1217 02:05:08.728111    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-184000
	
	I1217 02:05:08.728111    6768 ubuntu.go:182] provisioning hostname "no-preload-184000"
	I1217 02:05:08.732574    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:08.788471    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:08.788517    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:08.788517    6768 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-184000 && echo "no-preload-184000" | sudo tee /etc/hostname
	I1217 02:05:08.984320    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-184000
	
	I1217 02:05:08.988540    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:09.045241    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:09.046042    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:09.046073    6768 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-184000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-184000/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-184000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:05:09.239223    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
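	The guarded script above keeps /etc/hosts idempotent: it rewrites an existing 127.0.1.1 line if one is present, appends one otherwise, and does nothing when an entry for the hostname already exists. After it runs, a quick check (illustrative) should show the entry:

	    grep no-preload-184000 /etc/hosts    # expected: 127.0.1.1 no-preload-184000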
	I1217 02:05:09.239223    6768 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 02:05:09.239223    6768 ubuntu.go:190] setting up certificates
	I1217 02:05:09.239223    6768 provision.go:84] configureAuth start
	I1217 02:05:09.242936    6768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-184000
	I1217 02:05:09.300521    6768 provision.go:143] copyHostCerts
	I1217 02:05:09.300924    6768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 02:05:09.300924    6768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 02:05:09.301449    6768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 02:05:09.301878    6768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 02:05:09.301878    6768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 02:05:09.302546    6768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 02:05:09.303134    6768 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 02:05:09.303134    6768 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 02:05:09.303134    6768 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 02:05:09.303843    6768 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.no-preload-184000 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-184000]
	I1217 02:05:09.513127    6768 provision.go:177] copyRemoteCerts
	I1217 02:05:09.517075    6768 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:05:09.519665    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:09.573516    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:09.696089    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 02:05:09.723663    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:05:09.749598    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1217 02:05:09.779713    6768 provision.go:87] duration metric: took 540.4619ms to configureAuth
	I1217 02:05:09.779730    6768 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:05:09.779917    6768 config.go:182] Loaded profile config "no-preload-184000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:09.784013    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:09.841680    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:09.841680    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:09.841680    6768 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 02:05:10.010881    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 02:05:10.010926    6768 ubuntu.go:71] root file system type: overlay
	I1217 02:05:10.011054    6768 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 02:05:10.014899    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:10.071419    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:10.071649    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:10.071649    6768 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 02:05:10.253657    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 02:05:10.257912    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:10.314224    6768 main.go:143] libmachine: Using SSH client type: native
	I1217 02:05:10.314288    6768 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63566 <nil> <nil>}
	I1217 02:05:10.314288    6768 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 02:05:10.496294    6768 main.go:143] libmachine: SSH cmd err, output: <nil>: 
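	The diff-or-replace command above only installs docker.service.new and restarts Docker when the rendered unit differs from what is already on disk, which keeps unchanged restarts cheap. A way to confirm the override took effect, assuming shell access via minikube's ssh wrapper:

	    # Expect two ExecStart= lines: the blank reset plus the dockerd command
	    minikube ssh -p no-preload-184000 -- "sudo systemctl cat docker.service | grep -c 'ExecStart='"
	    minikube ssh -p no-preload-184000 -- "docker info --format '{{.CgroupDriver}}'"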
	I1217 02:05:10.496294    6768 machine.go:97] duration metric: took 5.0487445s to provisionDockerMachine
	I1217 02:05:10.496294    6768 start.go:293] postStartSetup for "no-preload-184000" (driver="docker")
	I1217 02:05:10.496294    6768 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:05:10.501160    6768 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:05:10.504159    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:10.558430    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:10.698125    6768 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:05:10.706351    6768 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:05:10.706403    6768 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:05:10.706403    6768 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 02:05:10.706403    6768 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 02:05:10.707067    6768 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 02:05:10.711519    6768 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:05:10.725151    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 02:05:10.754903    6768 start.go:296] duration metric: took 258.6046ms for postStartSetup
	I1217 02:05:10.759061    6768 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:05:10.762269    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:10.816597    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:10.943522    6768 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:05:10.958658    6768 fix.go:56] duration metric: took 7.411626s for fixHost
	I1217 02:05:10.958658    6768 start.go:83] releasing machines lock for "no-preload-184000", held for 7.4126419s
	I1217 02:05:10.962906    6768 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-184000
	I1217 02:05:11.017406    6768 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 02:05:11.021445    6768 ssh_runner.go:195] Run: cat /version.json
	I1217 02:05:11.021510    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:11.024650    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:11.076963    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:11.082042    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	W1217 02:05:11.198310    6768 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
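	This probe appears to inherit the Windows host's .exe suffix even though it runs inside the Linux node, where no curl.exe exists; exit status 127 is the shell's "command not found". That is what triggers the registry-connectivity warning a few lines below, independent of whether the registry is actually reachable. A manual recheck from inside the node would look like:

	    minikube ssh -p no-preload-184000 -- curl -sS -m 2 https://registry.k8s.io/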
	I1217 02:05:11.210947    6768 ssh_runner.go:195] Run: systemctl --version
	I1217 02:05:11.226813    6768 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:05:11.235667    6768 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:05:11.242573    6768 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:05:11.255007    6768 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:05:11.255007    6768 start.go:496] detecting cgroup driver to use...
	I1217 02:05:11.255007    6768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:05:11.256009    6768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:05:11.283766    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:05:11.303122    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:05:11.317795    6768 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:05:11.321726    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:05:11.340924    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	W1217 02:05:11.357913    6768 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 02:05:11.357979    6768 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 02:05:11.359375    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I1217 02:05:11.377107    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:05:11.395476    6768 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:05:11.418432    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:05:11.437643    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:05:11.458621    6768 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:05:11.477313    6768 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:05:11.495090    6768 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:05:11.513809    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:11.664976    6768 ssh_runner.go:195] Run: sudo systemctl restart containerd
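	The sed edits above pin containerd to the cgroupfs driver (SystemdCgroup = false), the runc v2 shim, the pause:3.10.1 sandbox image, and /etc/cni/net.d before the daemon is reloaded and restarted. A spot check of the resulting TOML (illustrative):

	    minikube ssh -p no-preload-184000 -- "grep -nE 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml"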
	I1217 02:05:11.829322    6768 start.go:496] detecting cgroup driver to use...
	I1217 02:05:11.829433    6768 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:05:11.835895    6768 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 02:05:11.860815    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:05:11.883615    6768 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 02:05:11.960567    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:05:11.983346    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:05:12.002889    6768 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:05:12.032515    6768 ssh_runner.go:195] Run: which cri-dockerd
	I1217 02:05:12.044249    6768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 02:05:12.056817    6768 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 02:05:12.080834    6768 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 02:05:12.249437    6768 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 02:05:12.397968    6768 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 02:05:12.397968    6768 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
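	The 130-byte daemon.json is written from memory and never echoed to the log, so its exact contents are not captured here. For a cgroupfs setup like this one it would plausibly be something like the following (an assumption, not taken from this run):

	    # Hypothetical reconstruction of the daemon.json content -- not captured from this run
	    cat <<'EOF' | sudo tee /etc/docker/daemon.json
	    {
	      "exec-opts": ["native.cgroupdriver=cgroupfs"]
	    }
	    EOF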
	I1217 02:05:12.425594    6768 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 02:05:12.447409    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:12.604225    6768 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 02:05:13.440560    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:05:13.466105    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 02:05:13.489994    6768 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 02:05:13.514704    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:05:13.536605    6768 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 02:05:13.693215    6768 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 02:05:13.846670    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:14.004258    6768 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 02:05:14.030193    6768 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 02:05:14.055627    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:14.209153    6768 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 02:05:14.322039    6768 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:05:14.339530    6768 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 02:05:14.345129    6768 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
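	cri-dockerd is socket-activated, which is why the socket unit is unmasked, enabled, and restarted before the service itself, and why readiness is confirmed by stat-ing the socket path rather than the unit state. The same check by hand (illustrative):

	    minikube ssh -p no-preload-184000 -- "sudo systemctl is-active cri-docker.socket cri-docker.service && stat -c '%F' /var/run/cri-dockerd.sock"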
	I1217 02:05:14.353653    6768 start.go:564] Will wait 60s for crictl version
	I1217 02:05:14.357665    6768 ssh_runner.go:195] Run: which crictl
	I1217 02:05:14.368483    6768 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:05:14.413189    6768 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 02:05:14.417273    6768 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:05:14.462617    6768 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:05:14.502904    6768 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 02:05:14.506033    6768 cli_runner.go:164] Run: docker exec -t no-preload-184000 dig +short host.docker.internal
	I1217 02:05:14.646991    6768 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 02:05:14.651689    6768 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 02:05:14.659909    6768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:05:14.680414    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:14.733079    6768 kubeadm.go:884] updating cluster {Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:05:14.734079    6768 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:05:14.737079    6768 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 02:05:14.767963    6768 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 02:05:14.767963    6768 cache_images.go:86] Images are preloaded, skipping loading
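	Because this is the no-preload profile, there is no preload tarball to extract; minikube instead lists what the node's Docker daemon already holds, and since all eight required images are present it skips loading them from the host cache. Reproducing the check by hand:

	    minikube ssh -p no-preload-184000 -- "docker images --format '{{.Repository}}:{{.Tag}}'"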
	I1217 02:05:14.767963    6768 kubeadm.go:935] updating node { 192.168.94.2 8443 v1.35.0-beta.0 docker true true} ...
	I1217 02:05:14.768480    6768 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-184000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.94.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
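	The kubelet drop-in uses the same reset pattern as the docker unit earlier: the bare ExecStart= clears any inherited command before the versioned kubelet binary is set with its node-specific flags. The merged unit can be inspected with:

	    minikube ssh -p no-preload-184000 -- systemctl cat kubelet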
	I1217 02:05:14.771542    6768 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 02:05:14.846616    6768 cni.go:84] Creating CNI manager for ""
	I1217 02:05:14.846636    6768 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:05:14.846636    6768 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1217 02:05:14.846636    6768 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.94.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-184000 NodeName:no-preload-184000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.94.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.94.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:05:14.846636    6768 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.94.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "no-preload-184000"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.94.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.94.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
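	The rendered config above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is written to /var/tmp/minikube/kubeadm.yaml.new below and later diffed against the existing kubeadm.yaml to decide whether the control plane needs reconfiguring. On recent kubeadm releases such a file can be sanity-checked as follows (assuming the kubeadm config validate subcommand is available in this build):

	    minikube ssh -p no-preload-184000 -- "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new"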
	
	I1217 02:05:14.851632    6768 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 02:05:14.863585    6768 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:05:14.868130    6768 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:05:14.879683    6768 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 02:05:14.899726    6768 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:05:14.919991    6768 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2225 bytes)
	I1217 02:05:14.944949    6768 ssh_runner.go:195] Run: grep 192.168.94.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:05:14.952431    6768 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.94.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:05:14.972008    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:15.116248    6768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:05:15.140002    6768 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000 for IP: 192.168.94.2
	I1217 02:05:15.140002    6768 certs.go:195] generating shared ca certs ...
	I1217 02:05:15.140002    6768 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:15.140318    6768 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 02:05:15.140318    6768 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 02:05:15.140951    6768 certs.go:257] generating profile certs ...
	I1217 02:05:15.141475    6768 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\client.key
	I1217 02:05:15.141776    6768 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.key.d162c569
	I1217 02:05:15.141823    6768 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\proxy-client.key
	I1217 02:05:15.142712    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 02:05:15.142929    6768 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 02:05:15.142993    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 02:05:15.143196    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 02:05:15.143459    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 02:05:15.143743    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 02:05:15.144134    6768 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 02:05:15.145445    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:05:15.174639    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 02:05:15.206543    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:05:15.237390    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 02:05:15.269725    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:05:15.299081    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:05:15.331970    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:05:15.364258    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\no-preload-184000\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 02:05:15.394880    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 02:05:15.424665    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:05:15.454305    6768 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 02:05:15.482694    6768 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:05:15.505956    6768 ssh_runner.go:195] Run: openssl version
	I1217 02:05:15.520857    6768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 02:05:15.538884    6768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 02:05:15.556769    6768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 02:05:15.565231    6768 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 02:05:15.569694    6768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 02:05:15.618090    6768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:05:15.636651    6768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:15.657687    6768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:05:15.678656    6768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:15.686438    6768 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:15.690381    6768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:05:15.738620    6768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:05:15.756906    6768 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 02:05:15.776662    6768 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 02:05:15.794117    6768 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 02:05:15.801453    6768 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 02:05:15.805697    6768 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 02:05:15.853109    6768 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
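	Each CA file above goes through the same three steps: copy the PEM under /usr/share/ca-certificates, symlink it into /etc/ssl/certs under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem), and verify the link. OpenSSL-based clients locate trust anchors by that hash, which comes from:

	    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem    # prints b5213941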
	I1217 02:05:15.871938    6768 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:05:15.885136    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:05:15.931869    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:05:15.978751    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:05:16.028376    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:05:16.079257    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:05:16.133289    6768 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
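	Each -checkend 86400 call makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours); a failing check here is what would trigger regeneration before the cluster comes back up. For example:

	    openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 \
	      && echo 'valid for 24h+' || echo 'expiring soon'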
	I1217 02:05:16.177187    6768 kubeadm.go:401] StartCluster: {Name:no-preload-184000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:no-preload-184000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:05:16.181577    6768 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 02:05:16.216215    6768 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:05:16.228229    6768 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:05:16.228229    6768 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:05:16.233407    6768 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:05:16.246099    6768 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:05:16.251775    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:16.304124    6768 kubeconfig.go:47] verify endpoint returned: get endpoint: "no-preload-184000" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:05:16.305294    6768 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "no-preload-184000" cluster setting kubeconfig missing "no-preload-184000" context setting]
	I1217 02:05:16.305850    6768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:16.326797    6768 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:05:16.342507    6768 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1217 02:05:16.342507    6768 kubeadm.go:602] duration metric: took 114.2766ms to restartPrimaryControlPlane
	I1217 02:05:16.342507    6768 kubeadm.go:403] duration metric: took 165.3768ms to StartCluster
	I1217 02:05:16.342507    6768 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:16.342507    6768 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:05:16.343620    6768 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:05:16.344231    6768 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 02:05:16.344231    6768 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:05:16.344231    6768 addons.go:70] Setting storage-provisioner=true in profile "no-preload-184000"
	I1217 02:05:16.344231    6768 addons.go:239] Setting addon storage-provisioner=true in "no-preload-184000"
	I1217 02:05:16.344231    6768 addons.go:70] Setting dashboard=true in profile "no-preload-184000"
	I1217 02:05:16.344231    6768 config.go:182] Loaded profile config "no-preload-184000": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:05:16.344231    6768 host.go:66] Checking if "no-preload-184000" exists ...
	I1217 02:05:16.344231    6768 addons.go:70] Setting default-storageclass=true in profile "no-preload-184000"
	I1217 02:05:16.344231    6768 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "no-preload-184000"
	I1217 02:05:16.344231    6768 addons.go:239] Setting addon dashboard=true in "no-preload-184000"
	W1217 02:05:16.344929    6768 addons.go:248] addon dashboard should already be in state true
	I1217 02:05:16.344929    6768 host.go:66] Checking if "no-preload-184000" exists ...
	I1217 02:05:16.347844    6768 out.go:179] * Verifying Kubernetes components...
	I1217 02:05:16.354044    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:16.354121    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:16.355814    6768 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:05:16.357052    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:16.409696    6768 addons.go:239] Setting addon default-storageclass=true in "no-preload-184000"
	I1217 02:05:16.409696    6768 host.go:66] Checking if "no-preload-184000" exists ...
	I1217 02:05:16.410688    6768 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:05:16.412689    6768 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:16.412689    6768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:05:16.416693    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:16.417698    6768 cli_runner.go:164] Run: docker container inspect no-preload-184000 --format={{.State.Status}}
	I1217 02:05:16.423696    6768 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:05:16.425691    6768 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 02:05:16.428703    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:05:16.428703    6768 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:05:16.431694    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:16.467691    6768 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:16.468689    6768 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:05:16.469695    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:16.471696    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-184000
	I1217 02:05:16.482691    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
	I1217 02:05:16.518691    6768 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa Username:docker}
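
The sshutil.go:53 lines above each open an SSH client to 127.0.0.1:63566 (the host port published for the node's 22/tcp) as user "docker" with the profile's id_rsa key. A minimal sketch of that client setup, assuming golang.org/x/crypto/ssh — not minikube's actual code:

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// dialNode opens an SSH client the way the sshutil lines above describe:
// key-based auth against the forwarded node port. Host-key checking is
// skipped here only to keep the sketch short; real code should verify it.
func dialNode(addr, user, keyPath string) (*ssh.Client, error) {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	return ssh.Dial("tcp", addr, cfg)
}

func main() {
	client, err := dialNode("127.0.0.1:63566", "docker",
		`C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\no-preload-184000\id_rsa`)
	if err != nil {
		fmt.Println("ssh dial failed:", err)
		return
	}
	defer client.Close()
	fmt.Println("connected")
}
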
	I1217 02:05:16.521691    6768 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:05:16.604232    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:16.609620    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:05:16.609620    6768 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:05:16.632701    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:05:16.632701    6768 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:05:16.648900    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:16.655841    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:05:16.655841    6768 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:05:16.700825    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:05:16.700825    6768 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 02:05:16.727124    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:05:16.728137    6768 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1217 02:05:16.747122    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:05:16.747167    6768 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:05:16.768592    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:05:16.768592    6768 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W1217 02:05:16.800138    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:16.800273    6768 retry.go:31] will retry after 331.277361ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
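
Each failed apply above is rescheduled by retry.go after a growing, slightly jittered delay ("will retry after 331.277361ms", then larger values below). A minimal sketch of that retry-with-backoff pattern — not minikube's actual retry.go; the doubling factor and 50% jitter are assumptions:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts run out,
// sleeping an exponentially growing, jittered delay between tries --
// the same shape as the "will retry after ..." lines in this log.
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// double the base each attempt, then add up to 50% random jitter
		delay := base << uint(i)
		delay += time.Duration(rand.Int63n(int64(delay / 2)))
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connection refused")
		}
		return nil
	})
	fmt.Println("result:", err)
}
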
	I1217 02:05:16.806289    6768 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" no-preload-184000
	W1217 02:05:16.807169    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:16.807169    6768 retry.go:31] will retry after 367.14462ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:16.821991    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:05:16.821991    6768 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:05:16.842976    6768 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:16.842976    6768 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:05:16.864982    6768 node_ready.go:35] waiting up to 6m0s for node "no-preload-184000" to be "Ready" ...
	I1217 02:05:16.867979    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:16.963061    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:16.963061    6768 retry.go:31] will retry after 179.721934ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
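
Every failure above has the same root cause: kubectl's client-side validation fetches the OpenAPI schema from the apiserver, and after the control-plane restart nothing is listening on localhost:8443 yet, so each validated apply dies with "connection refused" until the apiserver comes up — which is why minikube keeps retrying. The stderr itself names the workaround, --validate=false, which skips the schema download and defers validation to the server. A minimal sketch of the same apply with validation disabled (paths and KUBECONFIG taken from the log; the helper name is ours):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyNoValidate runs the kubectl apply seen above with client-side
// validation turned off. Without the /openapi/v2 fetch, the command
// only fails if the API request itself fails.
func applyNoValidate(kubectl, kubeconfig, manifest string) error {
	cmd := exec.Command(kubectl, "apply", "--validate=false", "-f", manifest)
	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	return cmd.Run()
}

func main() {
	err := applyNoValidate(
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"/var/lib/minikube/kubeconfig",
		"/etc/kubernetes/addons/storage-provisioner.yaml",
	)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
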
	I1217 02:05:17.138499    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:17.147072    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:17.178163    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:17.232301    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.232367    6768 retry.go:31] will retry after 261.645604ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:17.232463    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.232532    6768 retry.go:31] will retry after 358.922489ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:17.264584    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.264642    6768 retry.go:31] will retry after 293.195494ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.499020    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:05:17.564644    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:17.598253    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:17.609802    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.609802    6768 retry.go:31] will retry after 356.11648ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:17.728986    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.728986    6768 retry.go:31] will retry after 414.908289ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:17.728986    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.728986    6768 retry.go:31] will retry after 471.765196ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:17.972892    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:18.048428    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.048428    6768 retry.go:31] will retry after 848.614748ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.149277    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:05:18.205928    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:18.270282    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.270282    6768 retry.go:31] will retry after 717.444443ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:18.309651    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.309651    6768 retry.go:31] will retry after 981.836066ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.901981    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:18.981321    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.981857    6768 retry.go:31] will retry after 1.188790069s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:18.992863    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:19.074677    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:19.074677    6768 retry.go:31] will retry after 947.510236ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:19.297489    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:19.377867    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:19.377937    6768 retry.go:31] will retry after 1.104512362s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:20.028161    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:20.102126    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:20.102126    6768 retry.go:31] will retry after 2.018338834s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:20.175978    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:20.253210    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:20.253210    6768 retry.go:31] will retry after 2.536835686s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:20.487984    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:20.611020    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:20.611556    6768 retry.go:31] will retry after 1.621989786s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.126652    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:22.202802    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.202802    6768 retry.go:31] will retry after 2.213473046s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.239657    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:22.319492    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.319565    6768 retry.go:31] will retry after 2.644500815s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.794504    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:22.901867    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:22.901867    6768 retry.go:31] will retry after 2.159892203s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:24.422186    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:24.505078    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:24.505078    6768 retry.go:31] will retry after 5.38992916s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:24.969459    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:05:25.066905    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:25.098830    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:25.098830    6768 retry.go:31] will retry after 2.819506289s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:25.172740    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:25.172777    6768 retry.go:31] will retry after 5.817482434s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
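Up to this point, all three addon applies (dashboard, storageclass, storage-provisioner) fail with the same connection-refused error, and retry.go re-runs each one with a growing delay. A rough shell sketch of that retry-with-backoff pattern (illustrative only, not minikube's actual Go implementation; the file path is taken from the log):

    # retry kubectl apply with exponential backoff until the apiserver answers
    delay=1
    until sudo KUBECONFIG=/var/lib/minikube/kubeconfig kubectl apply --force \
        -f /etc/kubernetes/addons/storageclass.yaml; do
      echo "apply failed, retrying in ${delay}s"
      sleep "$delay"
      delay=$((delay * 2))
    done

No amount of retrying helps here, because the kubelet, and with it kube-apiserver, never comes up, as the kubeadm output below shows.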
	W1217 02:05:26.902270    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:05:29.785276   10580 kubeadm.go:319] error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	I1217 02:05:29.785276   10580 kubeadm.go:319] 
	I1217 02:05:29.785276   10580 kubeadm.go:319] To see the stack trace of this error execute with --v=5 or higher
	I1217 02:05:29.791358   10580 kubeadm.go:319] [init] Using Kubernetes version: v1.35.0-beta.0
	I1217 02:05:29.791358   10580 kubeadm.go:319] [preflight] Running pre-flight checks
	I1217 02:05:29.791358   10580 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1217 02:05:29.791358   10580 kubeadm.go:319] KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	I1217 02:05:29.791885   10580 kubeadm.go:319] CONFIG_NAMESPACES: enabled
	I1217 02:05:29.791966   10580 kubeadm.go:319] CONFIG_NET_NS: enabled
	I1217 02:05:29.792106   10580 kubeadm.go:319] CONFIG_PID_NS: enabled
	I1217 02:05:29.792212   10580 kubeadm.go:319] CONFIG_IPC_NS: enabled
	I1217 02:05:29.792322   10580 kubeadm.go:319] CONFIG_UTS_NS: enabled
	I1217 02:05:29.792428   10580 kubeadm.go:319] CONFIG_CPUSETS: enabled
	I1217 02:05:29.792578   10580 kubeadm.go:319] CONFIG_MEMCG: enabled
	I1217 02:05:29.792647   10580 kubeadm.go:319] CONFIG_INET: enabled
	I1217 02:05:29.792742   10580 kubeadm.go:319] CONFIG_EXT4_FS: enabled
	I1217 02:05:29.792840   10580 kubeadm.go:319] CONFIG_PROC_FS: enabled
	I1217 02:05:29.792946   10580 kubeadm.go:319] CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	I1217 02:05:29.793101   10580 kubeadm.go:319] CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	I1217 02:05:29.793180   10580 kubeadm.go:319] CONFIG_FAIR_GROUP_SCHED: enabled
	I1217 02:05:29.793180   10580 kubeadm.go:319] CONFIG_CGROUPS: enabled
	I1217 02:05:29.793180   10580 kubeadm.go:319] CONFIG_CGROUP_CPUACCT: enabled
	I1217 02:05:29.793180   10580 kubeadm.go:319] CONFIG_CGROUP_DEVICE: enabled
	I1217 02:05:29.793180   10580 kubeadm.go:319] CONFIG_CGROUP_FREEZER: enabled
	I1217 02:05:29.793715   10580 kubeadm.go:319] CONFIG_CGROUP_PIDS: enabled
	I1217 02:05:29.793854   10580 kubeadm.go:319] CONFIG_CGROUP_SCHED: enabled
	I1217 02:05:29.793953   10580 kubeadm.go:319] CONFIG_OVERLAY_FS: enabled
	I1217 02:05:29.794112   10580 kubeadm.go:319] CONFIG_AUFS_FS: not set - Required for aufs.
	I1217 02:05:29.794256   10580 kubeadm.go:319] CONFIG_BLK_DEV_DM: enabled
	I1217 02:05:29.794355   10580 kubeadm.go:319] CONFIG_CFS_BANDWIDTH: enabled
	I1217 02:05:29.794459   10580 kubeadm.go:319] CONFIG_SECCOMP: enabled
	I1217 02:05:29.794742   10580 kubeadm.go:319] CONFIG_SECCOMP_FILTER: enabled
	I1217 02:05:29.794802   10580 kubeadm.go:319] OS: Linux
	I1217 02:05:29.794969   10580 kubeadm.go:319] CGROUPS_CPU: enabled
	I1217 02:05:29.795102   10580 kubeadm.go:319] CGROUPS_CPUACCT: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1217 02:05:29.795263   10580 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1217 02:05:29.795785   10580 kubeadm.go:319] CGROUPS_BLKIO: enabled
	I1217 02:05:29.795959   10580 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1217 02:05:29.796062   10580 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1217 02:05:29.796062   10580 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1217 02:05:29.796062   10580 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1217 02:05:29.798726   10580 out.go:252]   - Generating certificates and keys ...
	I1217 02:05:29.798726   10580 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1217 02:05:29.798726   10580 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1217 02:05:29.799345   10580 kubeadm.go:319] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I1217 02:05:29.799533   10580 kubeadm.go:319] [certs] Using existing front-proxy-ca certificate authority
	I1217 02:05:29.799703   10580 kubeadm.go:319] [certs] Using existing front-proxy-client certificate and key on disk
	I1217 02:05:29.799861   10580 kubeadm.go:319] [certs] Using existing etcd/ca certificate authority
	I1217 02:05:29.800020   10580 kubeadm.go:319] [certs] Using existing etcd/server certificate and key on disk
	I1217 02:05:29.800151   10580 kubeadm.go:319] [certs] Using existing etcd/peer certificate and key on disk
	I1217 02:05:29.800313   10580 kubeadm.go:319] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I1217 02:05:29.800441   10580 kubeadm.go:319] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I1217 02:05:29.800526   10580 kubeadm.go:319] [certs] Using the existing "sa" key
	I1217 02:05:29.800681   10580 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1217 02:05:29.800781   10580 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1217 02:05:29.800906   10580 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1217 02:05:29.800906   10580 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1217 02:05:29.800906   10580 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1217 02:05:29.800906   10580 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1217 02:05:29.800906   10580 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1217 02:05:29.801499   10580 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1217 02:05:29.804029   10580 out.go:252]   - Booting up control plane ...
	I1217 02:05:29.804029   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1217 02:05:29.804029   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1217 02:05:29.804029   10580 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1217 02:05:29.804614   10580 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1217 02:05:29.804614   10580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1217 02:05:29.804614   10580 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1217 02:05:29.805159   10580 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1217 02:05:29.805159   10580 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1217 02:05:29.805159   10580 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1217 02:05:29.805159   10580 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1217 02:05:29.805683   10580 kubeadm.go:319] [kubelet-check] The kubelet is not healthy after 4m0.001314016s
	I1217 02:05:29.805683   10580 kubeadm.go:319] 
	I1217 02:05:29.805683   10580 kubeadm.go:319] Unfortunately, an error has occurred, likely caused by:
	I1217 02:05:29.805778   10580 kubeadm.go:319] 	- The kubelet is not running
	I1217 02:05:29.805778   10580 kubeadm.go:319] 	- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I1217 02:05:29.805778   10580 kubeadm.go:319] 
	I1217 02:05:29.805778   10580 kubeadm.go:319] If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I1217 02:05:29.805778   10580 kubeadm.go:319] 	- 'systemctl status kubelet'
	I1217 02:05:29.806377   10580 kubeadm.go:319] 	- 'journalctl -xeu kubelet'
	I1217 02:05:29.806377   10580 kubeadm.go:319] 
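On a docker-driver node, the two commands kubeadm suggests have to be executed inside the node container; via minikube's ssh wrapper that is (the profile name is a placeholder for one of the profiles under test):

    # inspect kubelet state and recent journal entries on the node
    minikube ssh -p <profile> -- sudo systemctl status kubelet
    minikube ssh -p <profile> -- sudo journalctl -u kubelet -n 100 --no-pager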
	I1217 02:05:29.806377   10580 kubeadm.go:403] duration metric: took 8m4.1029248s to StartCluster
	I1217 02:05:29.806377   10580 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1217 02:05:29.810341   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1217 02:05:29.871764   10580 cri.go:89] found id: ""
	I1217 02:05:29.871764   10580 logs.go:282] 0 containers: []
	W1217 02:05:29.871764   10580 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:05:29.871764   10580 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1217 02:05:29.876168   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1217 02:05:29.927013   10580 cri.go:89] found id: ""
	I1217 02:05:29.927013   10580 logs.go:282] 0 containers: []
	W1217 02:05:29.927013   10580 logs.go:284] No container was found matching "etcd"
	I1217 02:05:29.927013   10580 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1217 02:05:29.931518   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1217 02:05:29.980022   10580 cri.go:89] found id: ""
	I1217 02:05:29.980022   10580 logs.go:282] 0 containers: []
	W1217 02:05:29.980022   10580 logs.go:284] No container was found matching "coredns"
	I1217 02:05:29.980022   10580 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1217 02:05:29.984478   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1217 02:05:30.032552   10580 cri.go:89] found id: ""
	I1217 02:05:30.032552   10580 logs.go:282] 0 containers: []
	W1217 02:05:30.032552   10580 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:05:30.032552   10580 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1217 02:05:30.037694   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1217 02:05:30.082177   10580 cri.go:89] found id: ""
	I1217 02:05:30.082177   10580 logs.go:282] 0 containers: []
	W1217 02:05:30.082177   10580 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:05:30.082177   10580 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1217 02:05:30.087245   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1217 02:05:30.130585   10580 cri.go:89] found id: ""
	I1217 02:05:30.130585   10580 logs.go:282] 0 containers: []
	W1217 02:05:30.130585   10580 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:05:30.130585   10580 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1217 02:05:30.137646   10580 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1217 02:05:30.177235   10580 cri.go:89] found id: ""
	I1217 02:05:30.177235   10580 logs.go:282] 0 containers: []
	W1217 02:05:30.177235   10580 logs.go:284] No container was found matching "kindnet"
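Every crictl query above returns an empty id list: no control-plane containers (kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kube-proxy, coredns, kindnet) were ever created, which is consistent with the kubelet never passing its health check. The same two checks by hand, as a sketch (profile name is a placeholder):

    # an empty listing means the kubelet never started the static pods
    minikube ssh -p <profile> -- sudo crictl ps -a
    # probe the kubelet healthz endpoint that kubeadm polls (port 10248)
    minikube ssh -p <profile> -- curl -sS http://127.0.0.1:10248/healthz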
	I1217 02:05:30.177235   10580 logs.go:123] Gathering logs for container status ...
	I1217 02:05:30.177235   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:05:30.227457   10580 logs.go:123] Gathering logs for kubelet ...
	I1217 02:05:30.227457   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:05:30.291457   10580 logs.go:123] Gathering logs for dmesg ...
	I1217 02:05:30.291457   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:05:30.331904   10580 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:05:30.331904   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:05:30.416101   10580 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:05:30.405239   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.406412   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.407374   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.408863   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.410358   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:05:30.405239   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.406412   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.407374   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.408863   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:05:30.410358   10466 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:05:30.416101   10580 logs.go:123] Gathering logs for Docker ...
	I1217 02:05:30.416101   10580 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:05:30.444965   10580 out.go:434] Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.35.0-beta.0
	[preflight] Running pre-flight checks
	[preflight] The system verification failed. Printing the output from the verification:
	KERNEL_VERSION: 5.15.153.1-microsoft-standard-WSL2
	CONFIG_NAMESPACES: enabled
	CONFIG_NET_NS: enabled
	CONFIG_PID_NS: enabled
	CONFIG_IPC_NS: enabled
	CONFIG_UTS_NS: enabled
	CONFIG_CPUSETS: enabled
	CONFIG_MEMCG: enabled
	CONFIG_INET: enabled
	CONFIG_EXT4_FS: enabled
	CONFIG_PROC_FS: enabled
	CONFIG_NETFILTER_XT_TARGET_REDIRECT: enabled
	CONFIG_NETFILTER_XT_MATCH_COMMENT: enabled
	CONFIG_FAIR_GROUP_SCHED: enabled
	CONFIG_CGROUPS: enabled
	CONFIG_CGROUP_CPUACCT: enabled
	CONFIG_CGROUP_DEVICE: enabled
	CONFIG_CGROUP_FREEZER: enabled
	CONFIG_CGROUP_PIDS: enabled
	CONFIG_CGROUP_SCHED: enabled
	CONFIG_OVERLAY_FS: enabled
	CONFIG_AUFS_FS: not set - Required for aufs.
	CONFIG_BLK_DEV_DM: enabled
	CONFIG_CFS_BANDWIDTH: enabled
	CONFIG_SECCOMP: enabled
	CONFIG_SECCOMP_FILTER: enabled
	OS: Linux
	CGROUPS_CPU: enabled
	CGROUPS_CPUACCT: enabled
	CGROUPS_CPUSET: enabled
	CGROUPS_DEVICES: enabled
	CGROUPS_FREEZER: enabled
	CGROUPS_MEMORY: enabled
	CGROUPS_PIDS: enabled
	CGROUPS_HUGETLB: enabled
	CGROUPS_BLKIO: enabled
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "super-admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	[patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	[kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	[kubelet-check] The kubelet is not healthy after 4m0.001314016s
	
	Unfortunately, an error has occurred, likely caused by:
		- The kubelet is not running
		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
		- 'systemctl status kubelet'
		- 'journalctl -xeu kubelet'
	
	
	stderr:
		[WARNING Swap]: swap is supported for cgroup v2 only. The kubelet must be properly configured to use swap. Please refer to https://kubernetes.io/docs/concepts/architecture/nodes/#swap-memory, or disable swap on the node
		[WARNING SystemVerification]: cgroups v1 support is deprecated and will be removed in a future release. Please migrate to cgroups v2. To explicitly enable cgroups v1 support for kubelet v1.35 or newer, you must set the kubelet configuration option 'FailCgroupV1' to 'false'. You must also explicitly skip this validation. For more information, see https://git.k8s.io/enhancements/keps/sig-node/5573-remove-cgroup-v1
		[WARNING Service-kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error: error execution phase wait-control-plane: failed while waiting for the kubelet to start: The HTTP call equal to 'curl -sSL http://127.0.0.1:10248/healthz' returned error: Get "http://127.0.0.1:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused
	
	To see the stack trace of this error execute with --v=5 or higher
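	The two commands kubeadm suggests can be run from the host against the node container; a minimal sketch for the Docker driver, where <profile> is a placeholder for whichever profile failed:

	    minikube -p <profile> ssh -- sudo systemctl status kubelet --no-pager
	    minikube -p <profile> ssh -- sudo journalctl -xeu kubelet --no-pager | tail -n 100

	Since the kubelet health endpoint at 127.0.0.1:10248 refuses connections here, the journalctl output is normally where the actual exit reason of the service shows up.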
	W1217 02:05:30.445965   10580 out.go:285] * 
	W1217 02:05:30.445965   10580 out.go:285] X Error starting cluster: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[kubeadm init stdout and stderr omitted: byte-for-byte repeat of the output shown above]
	
	W1217 02:05:30.445965   10580 out.go:285] * 
	W1217 02:05:30.447753   10580 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
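	To gather the logs the box asks for, minikube's logs command takes a profile and an output file; a sketch with a placeholder profile name (minikube here stands for the minikube executable):

	    minikube logs -p <profile> --file=logs.txt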
	I1217 02:05:30.453258   10580 out.go:203] 
	W1217 02:05:30.456588   10580 out.go:285] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.35.0-beta.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables": Process exited with status 1
	stdout:
	[kubeadm init stdout and stderr omitted: identical to the output shown above]
	
	W1217 02:05:30.457182   10580 out.go:285] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W1217 02:05:30.457182   10580 out.go:285] * Related issue: https://github.com/kubernetes/minikube/issues/4172
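	The suggested workaround amounts to re-running the start command with one extra kubelet setting; a sketch that assumes the rest of the original invocation stays unchanged and uses a placeholder profile name:

	    minikube start -p <profile> --extra-config=kubelet.cgroup-driver=systemd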
	I1217 02:05:30.459905   10580 out.go:203] 
	I1217 02:05:27.923285    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:28.002844    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:28.002844    6768 retry.go:31] will retry after 5.747361639s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:29.900036    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:29.991553    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:29.991553    6768 retry.go:31] will retry after 9.429682843s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:30.993971    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:31.105446    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:31.105446    6768 retry.go:31] will retry after 5.178420591s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:33.754429    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:33.845352    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:33.845402    6768 retry.go:31] will retry after 9.642479435s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:36.288994    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:36.371093    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:36.371618    6768 retry.go:31] will retry after 14.211846335s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:36.936896    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:05:39.427367    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:39.502910    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:39.503030    6768 retry.go:31] will retry after 10.108696058s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:43.493020    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:43.580923    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:43.580923    6768 retry.go:31] will retry after 16.040898999s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:46.976967    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:05:49.617032    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:05:49.730959    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:49.730959    6768 retry.go:31] will retry after 16.582879704s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:50.589406    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:05:50.670822    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:50.670851    6768 retry.go:31] will retry after 12.887643821s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:05:57.019347    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:05:59.627687    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:05:59.713200    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:05:59.713723    6768 retry.go:31] will retry after 31.011345009s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:03.563906    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:06:03.651782    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:03.651782    6768 retry.go:31] will retry after 28.171942024s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:06.318780    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:06:06.402870    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:06.402870    6768 retry.go:31] will retry after 31.304704952s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:07.062212    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:06:17.102506    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:06:27.145042    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:06:30.731168    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:06:30.819096    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:30.819096    6768 retry.go:31] will retry after 35.987165188s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:31.828981    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:06:31.906351    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:31.906351    6768 retry.go:31] will retry after 41.89524319s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:37.186738    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:06:37.713791    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:06:37.796890    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:06:37.796890    6768 retry.go:31] will retry after 21.402180263s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:47.232761    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:06:57.278368    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:06:59.204689    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:06:59.287141    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:06:59.287141    6768 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:07:06.812100    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:06.894801    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:06.894801    6768 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	W1217 02:07:07.318929    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:07:13.807325    6768 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:13.898561    6768 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:13.899092    6768 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:07:13.904986    6768 out.go:179] * Enabled addons: 
	I1217 02:07:13.908697    6768 addons.go:530] duration metric: took 1m57.5627021s for enable addons: enabled=[]
	W1217 02:07:17.361931    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:07:27.404743    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	
	
	==> Docker <==
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.386670361Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.386753370Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.386763871Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.386768771Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.386775572Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.386796774Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.386828078Z" level=info msg="Initializing buildkit"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.501948357Z" level=info msg="Completed buildkit initialization"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.511624614Z" level=info msg="Daemon has completed initialization"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.511803733Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.511841238Z" level=info msg="API listen on [::]:2376"
	Dec 17 01:57:21 newest-cni-383500 dockerd[1197]: time="2025-12-17T01:57:21.511803133Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 01:57:21 newest-cni-383500 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 01:57:22 newest-cni-383500 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Loaded network plugin cni"
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 01:57:22 newest-cni-383500 cri-dockerd[1491]: time="2025-12-17T01:57:22Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 01:57:22 newest-cni-383500 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:07:33.759067   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:33.760437   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:33.761375   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:33.762906   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:07:33.764323   13065 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000002] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000002] FS:  0000000000000000 GS:  0000000000000000
	[  +5.854693] CPU: 4 PID: 461784 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000003] RIP: 0033:0x7fc56db92b20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fc56db92af6.
	[  +0.000001] RSP: 002b:00007fffd59b2fe0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000000] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.890415] CPU: 4 PID: 461911 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7ff5a808ab20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7ff5a808aaf6.
	[  +0.000001] RSP: 002b:00007ffc5c7667f0 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 02:07:33 up  2:26,  0 user,  load average: 0.66, 1.33, 2.76
	Linux newest-cni-383500 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:07:30 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:07:31 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 481.
	Dec 17 02:07:31 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:07:31 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:07:31 newest-cni-383500 kubelet[12889]: E1217 02:07:31.100411   12889 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:07:31 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:07:31 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:07:31 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 482.
	Dec 17 02:07:31 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:07:31 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:07:31 newest-cni-383500 kubelet[12902]: E1217 02:07:31.833123   12902 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:07:31 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:07:31 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:07:32 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 483.
	Dec 17 02:07:32 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:07:32 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:07:32 newest-cni-383500 kubelet[12929]: E1217 02:07:32.583156   12929 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:07:32 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:07:32 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:07:33 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 484.
	Dec 17 02:07:33 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:07:33 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:07:33 newest-cni-383500 kubelet[12958]: E1217 02:07:33.347004   12958 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:07:33 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:07:33 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
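The dump above also shows the root cause: in the kubelet section, v1.35.0-beta.0 refuses to start because the host still uses cgroup v1 ("kubelet is configured to not run on a host using cgroup v1"), and the Docker daemon log carries the matching deprecation warning for this WSL2 kernel. With the kubelet stuck in a systemd restart loop (restart counter 481 through 484), the apiserver never comes up, which accounts for every connection-refused error earlier in the log. The cgroup version the node sees can be checked directly (a diagnostic sketch, not part of the run; the profile name comes from the log):

	# "cgroup2fs" means cgroup v2; "tmpfs" means cgroup v1, the failing case here
	minikube ssh -p newest-cni-383500 -- stat -fc %T /sys/fs/cgroup/

On a WSL2 host, cgroup v2 can usually be enabled by adding kernelCommandLine = cgroup_no_v1=all under the [wsl2] section of .wslconfig and restarting WSL; whether that change is appropriate for this CI host is outside the scope of this report.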
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-383500 -n newest-cni-383500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-383500 -n newest-cni-383500: exit status 6 (570.8411ms)

-- stdout --
	Stopped
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E1217 02:07:34.819813    9176 status.go:458] kubeconfig endpoint: get endpoint: "newest-cni-383500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:263: status error: exit status 6 (may be ok)
helpers_test.go:265: "newest-cni-383500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (121.36s)
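
A secondary problem is visible in the status probe: the profile's endpoint is missing from the kubeconfig ("newest-cni-383500" does not appear in the kubeconfig file), so even after the apiserver recovers, kubectl would still point at a stale context. The fix the output itself suggests looks like this (a sketch, not executed in this run):

	# re-sync the kubeconfig entry for the profile, then confirm the active context
	minikube update-context -p newest-cni-383500
	kubectl config current-context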

x
+
TestStartStop/group/newest-cni/serial/SecondStart (381.39s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0
E1217 02:07:57.253836    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:08:04.326487    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:08:07.202326    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:08:14.172465    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:08:15.483963    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:08:46.396737    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:09:01.848234    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:09:30.283595    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:10:06.463932    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:10:13.038761    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-044000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:10:22.402820    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:10:24.921699    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:10:33.766942    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:10:38.680412    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:11:00.967167    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-278200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:11:05.629772    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
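
The cert_rotation errors interleaved above appear to be background noise from the test binary's client-go transport cache: they reference client certificates of profiles that earlier tests already tore down (functional-045600, bridge-891300, calico-891300, and so on), so those paths no longer exist. They come from the harness process (pid 4168), not from the start under test; minikube's own view of the surviving profiles would confirm it (a sketch, not part of the run):

	out/minikube-windows-amd64.exe profile list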
start_stop_delete_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 105 (6m14.8166053s)

-- stdout --
	* [newest-cni-383500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	* Starting "newest-cni-383500" primary control-plane node in "newest-cni-383500" cluster
	* Pulling base image v0.0.48-1765661130-22141 ...
	  - kubeadm.pod-network-cidr=10.42.0.0/16
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	  - Using image registry.k8s.io/echoserver:1.4
	  - Using image docker.io/kubernetesui/dashboard:v2.7.0
	* Enabled addons: 
	
	

-- /stdout --
** stderr ** 
	I1217 02:07:37.336708    6296 out.go:360] Setting OutFile to fd 968 ...
	I1217 02:07:37.380113    6296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:07:37.380113    6296 out.go:374] Setting ErrFile to fd 1700...
	I1217 02:07:37.380113    6296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:07:37.394455    6296 out.go:368] Setting JSON to false
	I1217 02:07:37.396490    6296 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8845,"bootTime":1765928411,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 02:07:37.397485    6296 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 02:07:37.401853    6296 out.go:179] * [newest-cni-383500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 02:07:37.405009    6296 notify.go:221] Checking for updates...
	I1217 02:07:37.407761    6296 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:37.412054    6296 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:07:37.415031    6296 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 02:07:37.416942    6296 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:07:37.418887    6296 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 02:07:37.422499    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:37.422499    6296 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:07:37.541250    6296 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 02:07:37.544536    6296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:07:37.790862    6296 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:07:37.763465755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:07:37.793941    6296 out.go:179] * Using the docker driver based on existing profile
	I1217 02:07:37.795944    6296 start.go:309] selected driver: docker
	I1217 02:07:37.795944    6296 start.go:927] validating driver "docker" against &{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:37.796941    6296 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:07:37.881125    6296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:07:38.106129    6296 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:07:38.085504737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:07:38.106129    6296 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 02:07:38.106129    6296 cni.go:84] Creating CNI manager for ""
	I1217 02:07:38.106661    6296 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:07:38.106789    6296 start.go:353] cluster config:
	{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:38.110370    6296 out.go:179] * Starting "newest-cni-383500" primary control-plane node in "newest-cni-383500" cluster
	I1217 02:07:38.113499    6296 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 02:07:38.115628    6296 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:07:38.118799    6296 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:07:38.118867    6296 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:07:38.118972    6296 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 02:07:38.119036    6296 cache.go:65] Caching tarball of preloaded images
	I1217 02:07:38.119094    6296 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 02:07:38.119094    6296 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 02:07:38.119094    6296 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 02:07:38.197259    6296 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:07:38.197259    6296 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:07:38.197259    6296 cache.go:243] Successfully downloaded all kic artifacts
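
The three cache lines above implement a simple short-circuit: if the pinned kicbase digest is already in the local daemon, both the pull and the load are skipped. A minimal Go sketch of that check, assuming only the docker CLI on PATH (the image reference below is abbreviated from the log):

// imagecheck_sketch.go - `docker image inspect` exits 0 only when the image
// is already present in the local daemon, which is exactly the skip condition.
package main

import (
	"fmt"
	"os/exec"
)

func imageInDaemon(ref string) bool {
	return exec.Command("docker", "image", "inspect", ref).Run() == nil
}

func main() {
	ref := "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141"
	if imageInDaemon(ref) {
		fmt.Println("exists in daemon, skipping load")
	} else {
		fmt.Println("not cached; a pull would happen here")
	}
}
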
	I1217 02:07:38.197259    6296 start.go:360] acquireMachinesLock for newest-cni-383500: {Name:mk34ae41921c4a11acc2a38ede8796b825a35934 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:07:38.197259    6296 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-383500"
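
The machines lock logged above carries Delay:500ms and Timeout:10m0s, i.e. poll every half second and give up after ten minutes. A minimal sketch of that retry loop, where a create-exclusive marker file stands in for minikube's actual lock implementation:

// lock_sketch.go - retry-with-deadline acquisition; the marker-file mechanism
// is an assumption, only the Delay/Timeout shape is taken from the log.
package main

import (
	"errors"
	"fmt"
	"os"
	"path/filepath"
	"time"
)

// tryAcquire models the lock as an exclusively created marker file.
func tryAcquire(path string) (bool, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
	if errors.Is(err, os.ErrExist) {
		return false, nil // held by someone else; retry after Delay
	}
	if err != nil {
		return false, err
	}
	return true, f.Close()
}

func acquire(path string, delay, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		ok, err := tryAcquire(path)
		if err != nil || ok {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(delay)
	}
}

func main() {
	lock := filepath.Join(os.TempDir(), "machines.lock")
	if err := acquire(lock, 500*time.Millisecond, 10*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("acquired", lock)
}
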
	I1217 02:07:38.197259    6296 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:07:38.197259    6296 fix.go:54] fixHost starting: 
	I1217 02:07:38.204499    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:38.259240    6296 fix.go:112] recreateIfNeeded on newest-cni-383500: state=Stopped err=<nil>
	W1217 02:07:38.259240    6296 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 02:07:38.262335    6296 out.go:252] * Restarting existing docker container for "newest-cni-383500" ...
	I1217 02:07:38.265716    6296 cli_runner.go:164] Run: docker start newest-cni-383500
	I1217 02:07:38.804123    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:38.863188    6296 kic.go:430] container "newest-cni-383500" state is running.
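
The restart sequence above is inspect, start, inspect again: read {{.State.Status}}, run `docker start` when the container is stopped, then confirm it is running. A minimal sketch shelling out to the docker CLI, with the container name taken from the log:

// restart_sketch.go - the inspect-then-start flow; "exited" is docker's status
// for what the fix step above reports as state=Stopped.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func state(name string) (string, error) {
	out, err := exec.Command("docker", "container", "inspect", name,
		"--format", "{{.State.Status}}").Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const name = "newest-cni-383500"
	s, err := state(name)
	if err != nil {
		panic(err)
	}
	if s != "running" {
		if err := exec.Command("docker", "start", name).Run(); err != nil {
			panic(err)
		}
		s, _ = state(name)
	}
	fmt.Printf("container %q state is %s\n", name, s)
}
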
	I1217 02:07:38.868900    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:38.924169    6296 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 02:07:38.926083    6296 machine.go:94] provisionDockerMachine start ...
	I1217 02:07:38.928987    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:38.984001    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:38.984993    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:38.984993    6296 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:07:38.986003    6296 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 02:07:42.161557    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 02:07:42.161646    6296 ubuntu.go:182] provisioning hostname "newest-cni-383500"
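
provisionDockerMachine reaches the container over its forwarded SSH port (127.0.0.1:63782 here) and starts by running `hostname`; as the lines above show, the first dial can fail with a handshake EOF while sshd is still coming up, and a retry then succeeds. A minimal sketch using golang.org/x/crypto/ssh, with the key path and port copied from the log and host-key checking disabled as befits a throwaway test rig:

// ssh_sketch.go - dial the forwarded port and run the same first command the
// provisioner runs; single attempt, no retry loop, errors abbreviated.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:63782", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	out, err := sess.Output("hostname") // the log runs exactly this command
	if err != nil {
		panic(err)
	}
	fmt.Printf("remote hostname: %s", out)
}
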
	I1217 02:07:42.166827    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.231443    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:42.231698    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:42.231698    6296 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-383500 && echo "newest-cni-383500" | sudo tee /etc/hostname
	I1217 02:07:42.423907    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 02:07:42.432743    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.491085    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:42.491085    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:42.491085    6296 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-383500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-383500/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-383500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:07:42.667009    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 02:07:42.667009    6296 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 02:07:42.667009    6296 ubuntu.go:190] setting up certificates
	I1217 02:07:42.667009    6296 provision.go:84] configureAuth start
	I1217 02:07:42.671320    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:42.724474    6296 provision.go:143] copyHostCerts
	I1217 02:07:42.725072    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 02:07:42.725072    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 02:07:42.725072    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 02:07:42.726229    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 02:07:42.726229    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 02:07:42.726812    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 02:07:42.727386    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 02:07:42.727386    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 02:07:42.727386    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 02:07:42.728644    6296 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-383500 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-383500]
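
The server cert generated above is signed by the minikube CA and carries the SANs listed in the log (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-383500). A minimal crypto/x509 sketch of the same shape; the in-memory CA below is a stand-in for ca.pem/ca-key.pem, and lifetimes are illustrative:

// servercert_sketch.go - issue a CA-signed server certificate with the org
// and SANs from the provision step above.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func check(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Stand-in CA; the real flow loads ca.pem / ca-key.pem from the certs dir.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	ca := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, ca, ca, &caKey.PublicKey, caKey)
	check(err)
	caCert, err := x509.ParseCertificate(caDER)
	check(err)

	// Server certificate with the org and SANs from the log line above.
	srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
	check(err)
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-383500"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-383500"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &srvKey.PublicKey, caKey)
	check(err)
	check(pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})) // server.pem body
}
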
	I1217 02:07:42.882778    6296 provision.go:177] copyRemoteCerts
	I1217 02:07:42.886667    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:07:42.889412    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.946034    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:43.080244    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 02:07:43.111350    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:07:43.145228    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 02:07:43.176328    6296 provision.go:87] duration metric: took 509.312ms to configureAuth
	I1217 02:07:43.176328    6296 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:07:43.176328    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:43.180705    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.236378    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.237514    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.237514    6296 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 02:07:43.404492    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 02:07:43.404492    6296 ubuntu.go:71] root file system type: overlay
	I1217 02:07:43.405056    6296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 02:07:43.408624    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.465282    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.465408    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.465408    6296 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 02:07:43.658319    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 02:07:43.662395    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.719191    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.719552    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.719552    6296 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 02:07:43.890999    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 02:07:43.890999    6296 machine.go:97] duration metric: took 4.9648419s to provisionDockerMachine
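
The diff-or-replace command above is the idempotent half of provisioning: only when docker.service.new differs from the installed unit does minikube move it into place and bounce the daemon; on an unchanged unit the diff succeeds and nothing restarts. A minimal sketch of that compare-swap-restart, run inside the guest with paths from the log:

// unitswap_sketch.go - install the rendered unit only on change, then reload
// systemd; mirrors the logged shell one-liner, not minikube's Go internals.
package main

import (
	"bytes"
	"os"
	"os/exec"
)

func main() {
	cur, _ := os.ReadFile("/lib/systemd/system/docker.service") // may not exist yet
	next, err := os.ReadFile("/lib/systemd/system/docker.service.new")
	if err != nil {
		panic(err)
	}
	if bytes.Equal(cur, next) {
		return // unit unchanged: the no-op branch the log takes on a clean restart
	}
	if err := os.Rename("/lib/systemd/system/docker.service.new",
		"/lib/systemd/system/docker.service"); err != nil {
		panic(err)
	}
	for _, args := range [][]string{
		{"systemctl", "daemon-reload"},
		{"systemctl", "enable", "docker"},
		{"systemctl", "restart", "docker"},
	} {
		if err := exec.Command(args[0], args[1:]...).Run(); err != nil {
			panic(err)
		}
	}
}
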
	I1217 02:07:43.890999    6296 start.go:293] postStartSetup for "newest-cni-383500" (driver="docker")
	I1217 02:07:43.890999    6296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:07:43.895385    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:07:43.899109    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.952181    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.085157    6296 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:07:44.092998    6296 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:07:44.093086    6296 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:07:44.093086    6296 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 02:07:44.093465    6296 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 02:07:44.094379    6296 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 02:07:44.099969    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:07:44.115031    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 02:07:44.146317    6296 start.go:296] duration metric: took 255.2637ms for postStartSetup
	I1217 02:07:44.150381    6296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:07:44.153098    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.206142    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.337637    6296 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:07:44.346313    6296 fix.go:56] duration metric: took 6.1489614s for fixHost
	I1217 02:07:44.346313    6296 start.go:83] releasing machines lock for "newest-cni-383500", held for 6.1489614s
	I1217 02:07:44.350643    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:44.409164    6296 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 02:07:44.413957    6296 ssh_runner.go:195] Run: cat /version.json
	I1217 02:07:44.414540    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.416694    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.466739    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.469418    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	W1217 02:07:44.591848    6296 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 02:07:44.598090    6296 ssh_runner.go:195] Run: systemctl --version
	I1217 02:07:44.614283    6296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:07:44.624324    6296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:07:44.628955    6296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:07:44.642200    6296 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:07:44.642243    6296 start.go:496] detecting cgroup driver to use...
	I1217 02:07:44.642333    6296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:07:44.642453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:07:44.671216    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:07:44.689408    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:07:44.702919    6296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:07:44.707856    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:07:44.727869    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:07:44.747180    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	W1217 02:07:44.751020    6296 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 02:07:44.751020    6296 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 02:07:44.766866    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:07:44.786853    6296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:07:44.806986    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:07:44.828346    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:07:44.848400    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:07:44.870349    6296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:07:44.887217    6296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:07:44.905216    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:45.047629    6296 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 02:07:45.203749    6296 start.go:496] detecting cgroup driver to use...
	I1217 02:07:45.203842    6296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:07:45.209421    6296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 02:07:45.236823    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:07:45.259331    6296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 02:07:45.337368    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:07:45.361492    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:07:45.381383    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:07:45.409600    6296 ssh_runner.go:195] Run: which cri-dockerd
	I1217 02:07:45.421762    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 02:07:45.435668    6296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 02:07:45.461708    6296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 02:07:45.616228    6296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 02:07:45.751670    6296 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 02:07:45.751670    6296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 02:07:45.778504    6296 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 02:07:45.800985    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:45.956342    6296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 02:07:46.816501    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:07:46.840410    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 02:07:46.865817    6296 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 02:07:46.890943    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:07:46.914319    6296 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 02:07:47.058242    6296 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 02:07:47.214522    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:47.355565    6296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 02:07:47.382801    6296 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 02:07:47.407455    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:47.558893    6296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 02:07:47.666138    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:07:47.686246    6296 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 02:07:47.690618    6296 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
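
"Will wait 60s for socket path" is a plain stat poll: retry until /var/run/cri-dockerd.sock appears or the deadline passes. A minimal sketch (the 500ms poll interval is an assumption, not taken from the log):

// waitsock_sketch.go - poll for a path until it exists or the timeout expires.
package main

import (
	"fmt"
	"os"
	"time"
)

func waitForPath(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %s", path, timeout)
}

func main() {
	if err := waitForPath("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		panic(err)
	}
	fmt.Println("socket ready")
}
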
	I1217 02:07:47.697013    6296 start.go:564] Will wait 60s for crictl version
	I1217 02:07:47.702316    6296 ssh_runner.go:195] Run: which crictl
	I1217 02:07:47.713878    6296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:07:47.755301    6296 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 02:07:47.758809    6296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:07:47.803772    6296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:07:47.845573    6296 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 02:07:47.849368    6296 cli_runner.go:164] Run: docker exec -t newest-cni-383500 dig +short host.docker.internal
	I1217 02:07:47.978778    6296 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 02:07:47.983162    6296 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 02:07:47.993198    6296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
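
The host IP behind host.minikube.internal is discovered by resolving host.docker.internal from inside the container, as the dig call above shows. A minimal sketch of the same probe via docker exec (container name from the log):

// hostip_sketch.go - ask the container's resolver for the Docker Desktop host
// address, the value written into /etc/hosts in the next step.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "exec", "-t", "newest-cni-383500",
		"dig", "+short", "host.docker.internal").Output()
	if err != nil {
		panic(err)
	}
	ip := strings.TrimSpace(string(out))
	fmt.Println("host ip for mount in container:", ip) // the log saw 192.168.65.254
}
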
	I1217 02:07:48.011887    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:48.072090    6296 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 02:07:48.073820    6296 kubeadm.go:884] updating cluster {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:07:48.073820    6296 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:07:48.077080    6296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 02:07:48.110342    6296 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 02:07:48.110411    6296 docker.go:621] Images already preloaded, skipping extraction
	I1217 02:07:48.113821    6296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 02:07:48.144461    6296 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 02:07:48.144530    6296 cache_images.go:86] Images are preloaded, skipping loading
	I1217 02:07:48.144530    6296 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1217 02:07:48.144779    6296 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-383500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 02:07:48.149102    6296 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
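
Keeping the kubelet's cgroupDriver in step with the runtime hinges on the query above. A minimal sketch of reading it straight from the daemon:

// cgroupdriver_sketch.go - the same one-field docker info query the log runs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("docker", "info", "--format", "{{.CgroupDriver}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("cgroup driver:", strings.TrimSpace(string(out))) // "cgroupfs" on this host
}
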
	I1217 02:07:48.225894    6296 cni.go:84] Creating CNI manager for ""
	I1217 02:07:48.225894    6296 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:07:48.225894    6296 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 02:07:48.225894    6296 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-383500 NodeName:newest-cni-383500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:07:48.226504    6296 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-383500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 02:07:48.230913    6296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 02:07:48.243749    6296 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:07:48.248634    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:07:48.262382    6296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 02:07:48.284386    6296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:07:48.306623    6296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1217 02:07:48.332101    6296 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:07:48.341865    6296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:07:48.360919    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:48.498620    6296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:07:48.520308    6296 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500 for IP: 192.168.76.2
	I1217 02:07:48.520346    6296 certs.go:195] generating shared ca certs ...
	I1217 02:07:48.520390    6296 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:48.520420    6296 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 02:07:48.521152    6296 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 02:07:48.521359    6296 certs.go:257] generating profile certs ...
	I1217 02:07:48.521695    6296 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key
	I1217 02:07:48.521695    6296 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8
	I1217 02:07:48.522472    6296 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key
	I1217 02:07:48.523217    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 02:07:48.523515    6296 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 02:07:48.523598    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 02:07:48.523888    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 02:07:48.524140    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 02:07:48.524399    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 02:07:48.525045    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 02:07:48.526649    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:07:48.558725    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 02:07:48.590333    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:07:48.621493    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 02:07:48.650907    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:07:48.678948    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:07:48.708871    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:07:48.738822    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 02:07:48.769873    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 02:07:48.801411    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 02:07:48.828208    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:07:48.859551    6296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:07:48.888197    6296 ssh_runner.go:195] Run: openssl version
	I1217 02:07:48.903194    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.920018    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 02:07:48.936734    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.943690    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.948571    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.997651    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:07:49.015514    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.035513    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:07:49.056511    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.065394    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.070742    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.117805    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:07:49.140198    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.156992    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 02:07:49.175485    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.184194    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.187479    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.237543    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
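
Each CA installed above ends with an `openssl x509 -hash` plus a /etc/ssl/certs/<hash>.0 symlink, which is how OpenSSL locates trusted certs by subject-name hash (b5213941.0 for minikubeCA in this run). A minimal sketch of that pairing, shelling out to openssl for the hash:

// cahash_sketch.go - compute the subject hash and (re)point the .0 link at
// the PEM file, matching the ln -fs pattern in the log.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. b5213941 above
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	os.Remove(link) // -f semantics: replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
	fmt.Println("linked", link, "->", pem)
}
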
	I1217 02:07:49.254809    6296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:07:49.269508    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:07:49.317073    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:07:49.365797    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:07:49.413853    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:07:49.462871    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:07:49.515512    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
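
The six `-checkend 86400` runs above ask whether each control-plane cert survives the next 24 hours. The same test done natively, as a minimal sketch (paths copied from the log, list shortened):

// checkend_sketch.go - parse each PEM cert and flag any NotAfter within 24h,
// the native equivalent of openssl x509 -checkend 86400.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	for _, p := range []string{
		"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
		"/var/lib/minikube/certs/etcd/server.crt",
		"/var/lib/minikube/certs/front-proxy-client.crt",
	} {
		soon, err := expiresWithin(p, 24*time.Hour)
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s expires within 24h: %v\n", p, soon)
	}
}
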
	I1217 02:07:49.558666    6296 kubeadm.go:401] StartCluster: {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:49.563317    6296 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 02:07:49.602899    6296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:07:49.616365    6296 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:07:49.616365    6296 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:07:49.622022    6296 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:07:49.637152    6296 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:07:49.641090    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.693295    6296 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-383500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:49.693843    6296 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-383500" cluster setting kubeconfig missing "newest-cni-383500" context setting]
	I1217 02:07:49.694722    6296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
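
The repair above adds the missing cluster and context stanzas for newest-cni-383500 to the shared kubeconfig. A minimal sketch with client-go's clientcmd; the server URL's forwarded port below is hypothetical (the log reads the 8443 mapping separately), and a real entry would also carry certificate paths:

// kubeconfig_sketch.go - add absent cluster/context entries, then write the
// file back under the same path the log locks.
package main

import (
	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	path := `C:\Users\jenkins.minikube4\minikube-integration\kubeconfig`
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		panic(err)
	}
	const name = "newest-cni-383500"
	if _, ok := cfg.Clusters[name]; !ok { // "kubeconfig missing ... cluster setting"
		c := api.NewCluster()
		c.Server = "https://127.0.0.1:63800" // hypothetical forwarded 8443 port
		cfg.Clusters[name] = c
	}
	if _, ok := cfg.Contexts[name]; !ok { // "... context setting"
		ctx := api.NewContext()
		ctx.Cluster = name
		ctx.AuthInfo = name // the profile's client cert/key would back this entry
		cfg.Contexts[name] = ctx
	}
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		panic(err)
	}
}
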
	I1217 02:07:49.716755    6296 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:07:49.731850    6296 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1217 02:07:49.731850    6296 kubeadm.go:602] duration metric: took 115.4836ms to restartPrimaryControlPlane
	I1217 02:07:49.731850    6296 kubeadm.go:403] duration metric: took 173.1816ms to StartCluster
	I1217 02:07:49.731850    6296 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:49.731850    6296 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:49.732839    6296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:49.734654    6296 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 02:07:49.734654    6296 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:07:49.734654    6296 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:70] Setting dashboard=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:49.734654    6296 addons.go:70] Setting default-storageclass=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.734654    6296 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:239] Setting addon dashboard=true in "newest-cni-383500"
	W1217 02:07:49.734654    6296 addons.go:248] addon dashboard should already be in state true
	I1217 02:07:49.735179    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.739634    6296 out.go:179] * Verifying Kubernetes components...
	I1217 02:07:49.743427    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.744378    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.744378    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.745812    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:49.809135    6296 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:07:49.809532    6296 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 02:07:49.812989    6296 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:49.812989    6296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:07:49.816981    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.817010    6296 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:07:49.818467    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:07:49.818467    6296 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:07:49.823270    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.824987    6296 addons.go:239] Setting addon default-storageclass=true in "newest-cni-383500"
	I1217 02:07:49.825100    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.836645    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.881995    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.881995    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.889991    6296 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:49.889991    6296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:07:49.892991    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.925992    6296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:07:49.943010    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.950996    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
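Each cli_runner inspect above uses a Go template to read the host port Docker published for a container port (22/tcp for the SSH tunnel, 8443/tcp for the apiserver). A self-contained helper doing the same lookup via the Docker CLI, as a sketch; the container name and ports are taken from the log:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// hostPort asks the Docker CLI which host port is published for the given
// container port, using the same template as the inspect calls above.
func hostPort(container, port string) (string, error) {
	tmpl := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	p, err := hostPort("newest-cni-383500", "8443/tcp")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("apiserver published on host port", p)
}
```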
	I1217 02:07:50.005058    6296 api_server.go:52] waiting for apiserver process to appear ...
	I1217 02:07:50.009064    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
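The pgrep command above is re-run roughly every half second for the rest of this excerpt (02:07:50.510, 02:07:51.009, 02:07:51.510, ...), which reads as a poll-until-the-process-appears loop. A hedged sketch of such a poll; the timeout is illustrative, and the real runner executes the command over SSH inside the node rather than locally:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess re-runs pgrep until it finds a match or the deadline passes.
// pgrep exits 0 only when at least one process matches the pattern.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q within %v", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```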
	I1217 02:07:50.011068    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.014077    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:07:50.014077    6296 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:07:50.034057    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:07:50.034057    6296 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:07:50.102553    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:07:50.102611    6296 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:07:50.106900    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:50.124027    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:07:50.124027    6296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 02:07:50.189590    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:07:50.189677    6296 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1217 02:07:50.190082    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.190082    6296 retry.go:31] will retry after 343.200838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
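Every apply failure in this run has the same root cause: kubectl validates manifests against the apiserver's OpenAPI endpoint, and the apiserver behind localhost:8443 is not accepting connections yet, so validation itself fails with connection refused (passing --validate=false would skip the check, at the cost of client-side validation). A minimal connectivity probe against the same endpoint, as a sketch:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
)

func main() {
	client := &http.Client{Transport: &http.Transport{
		// The apiserver serves a self-signed cert during bring-up, so skip
		// verification for this connectivity check only.
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	resp, err := client.Get("https://localhost:8443/openapi/v2?timeout=32s")
	if err != nil {
		// Mirrors the "connection refused" seen in the kubectl errors above.
		fmt.Println("apiserver not reachable yet:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("openapi endpoint answered:", resp.Status)
}
```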
	I1217 02:07:50.212250    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:07:50.212311    6296 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:07:50.231619    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:07:50.231619    6296 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W1217 02:07:50.241078    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.241078    6296 retry.go:31] will retry after 338.608253ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.254747    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:07:50.254794    6296 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:07:50.277655    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:50.277655    6296 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:07:50.303268    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:50.381205    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.381205    6296 retry.go:31] will retry after 204.689537ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
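The growing delays in the retry lines (343ms, 441ms, 637ms, ... up to about 1.35s) suggest a randomized, roughly doubling backoff between apply attempts. A sketch of that loop; the base delay, jitter, and attempt cap here are assumptions, not minikube's actual constants:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryApply runs apply until it succeeds or attempts are exhausted,
// sleeping a jittered, doubling delay between failures.
func retryApply(apply func() error, attempts int) error {
	delay := 200 * time.Millisecond
	var err error
	for i := 0; i < attempts; i++ {
		if err = apply(); err == nil {
			return nil
		}
		jittered := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", jittered, err)
		time.Sleep(jittered)
		delay *= 2
	}
	return err
}

func main() {
	calls := 0
	err := retryApply(func() error {
		calls++
		if calls < 3 {
			return errors.New("connection refused") // fails twice, then succeeds
		}
		return nil
	}, 5)
	fmt.Println("final result:", err)
}
```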
	I1217 02:07:50.510673    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:50.538343    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.585518    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:50.590250    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:50.625635    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.625793    6296 retry.go:31] will retry after 198.686568ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.703247    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.703247    6296 retry.go:31] will retry after 199.792365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.713669    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.714671    6296 retry.go:31] will retry after 441.125735ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.831068    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.910787    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:50.921027    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.921027    6296 retry.go:31] will retry after 637.088373ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.993148    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.993148    6296 retry.go:31] will retry after 819.774881ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.009768    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.161082    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:51.282295    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.282369    6296 retry.go:31] will retry after 677.278565ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.510844    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.563702    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:51.642986    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.642986    6296 retry.go:31] will retry after 1.231128198s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.817677    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:51.902470    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.902470    6296 retry.go:31] will retry after 1.160161898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.964724    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:52.009393    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:52.053520    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.053520    6296 retry.go:31] will retry after 497.775491ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.510530    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:52.556698    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:52.641425    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.641425    6296 retry.go:31] will retry after 893.419079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.880811    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:52.961643    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.961643    6296 retry.go:31] will retry after 1.354718896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.009905    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:53.068292    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:53.159843    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.159885    6296 retry.go:31] will retry after 830.811591ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
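The retry.go:31 lines show the driving loop: each failed apply is retried after a jittered, roughly doubling delay (0.83s and 0.89s at first, climbing past 13s further down in this log). A minimal sketch of that pattern, assuming a stand-in apply function; this is not minikube's actual retry helper:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // apply stands in for the "sudo ... kubectl apply --force -f <manifest>"
    // invocation in the log; it fails while the apiserver is unreachable.
    func apply(manifest string) error {
    	return errors.New("dial tcp [::1]:8443: connect: connection refused")
    }

    func main() {
    	delay := 800 * time.Millisecond // rough starting point seen in the log
    	for attempt := 1; attempt <= 6; attempt++ {
    		err := apply("/etc/kubernetes/addons/storageclass.yaml")
    		if err == nil {
    			fmt.Println("applied")
    			return
    		}
    		// Jittered, roughly doubling backoff, matching the irregular
    		// 0.8s -> 1.4s -> ... -> 13.9s progression in the log.
    		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
    		fmt.Printf("attempt %d failed, will retry after %v: %v\n",
    			attempt, wait.Round(time.Millisecond), err)
    		time.Sleep(wait)
    		delay *= 2
    	}
    	fmt.Println("giving up")
    }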
	I1217 02:07:53.510300    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:53.539679    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:53.634195    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.634195    6296 retry.go:31] will retry after 1.875797166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.997012    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:54.010116    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:54.085004    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.085004    6296 retry.go:31] will retry after 2.403477641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.321510    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:54.401677    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.401677    6296 retry.go:31] will retry after 2.197762331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.509750    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:55.011577    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:55.509949    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
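Each `ssh_runner.go:195] Run:` line is a command executed inside the minikube node over SSH; with the docker driver the node is a container whose SSH port is forwarded to 127.0.0.1. A rough sketch of running one of these commands the same way, where the key path and port are hypothetical placeholders, not values from this log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Hypothetical key path and forwarded port: with the docker driver,
    	// minikube exposes the node's SSH port on 127.0.0.1 and logs in as
    	// user "docker".
    	cmd := exec.Command("ssh",
    		"-i", "/path/to/.minikube/machines/minikube/id_rsa", // placeholder
    		"-p", "32772", // placeholder
    		"-o", "StrictHostKeyChecking=no",
    		"docker@127.0.0.1",
    		"sudo pgrep -xnf kube-apiserver.*minikube.*")
    	out, err := cmd.CombinedOutput()
    	fmt.Printf("exit err=%v, output=%q\n", err, out)
    }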
	I1217 02:07:55.514301    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:55.590724    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:55.590724    6296 retry.go:31] will retry after 3.771224323s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.010995    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:56.493760    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:56.509755    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:56.580067    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.580067    6296 retry.go:31] will retry after 2.862008002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.606008    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:56.692846    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.693375    6296 retry.go:31] will retry after 3.419223727s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:57.009866    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:57.510327    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:58.010333    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:58.511391    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:59.013796    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
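Interleaved with the applies, the repeated pgrep runs above poll for the kube-apiserver process at roughly 500ms intervals; pgrep exits 0 only once a matching process exists, which is the signal that the applies can finally succeed. A sketch of that wait loop, assuming local exec rather than minikube's SSH runner:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // Wait until a kube-apiserver process for this profile exists, mirroring
    // the ~500ms polling cadence in the log. Run returns nil only when pgrep
    // exits 0, i.e. when it found a match.
    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run()
    		if err == nil {
    			fmt.Println("kube-apiserver is running")
    			return
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	fmt.Println("timed out waiting for kube-apiserver")
    }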
	I1217 02:07:59.367655    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:59.447582    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:59.457416    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.457416    6296 retry.go:31] will retry after 6.254269418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.510215    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:59.536524    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.536524    6296 retry.go:31] will retry after 4.240139996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.010517    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:00.118263    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:00.197472    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.197472    6296 retry.go:31] will retry after 5.486941273s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.511349    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:01.012031    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:01.510877    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:02.011372    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:02.510995    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.011372    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.511479    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.781390    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:03.867561    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:03.867561    6296 retry.go:31] will retry after 5.255488401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:04.011296    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:04.510695    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.011055    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.510174    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.690069    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:05.718147    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:05.792389    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:05.792389    6296 retry.go:31] will retry after 3.294946391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:05.802187    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:05.802187    6296 retry.go:31] will retry after 6.599881974s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:06.010721    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:06.509941    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:07.010092    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:07.511303    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:08.011059    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:08.511015    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.009909    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.092821    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:09.127423    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:09.180638    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.180716    6296 retry.go:31] will retry after 13.056189647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:09.211988    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.212069    6296 retry.go:31] will retry after 13.872512266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
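Three retry loops are in flight at once here, one per addon manifest set (dashboard, storage-provisioner, storageclass), which is why their timestamps interleave and each loop's backoff grows independently, reaching 13s+ by this point. A sketch of that fan-out using the same stand-in retry idea as the earlier sketch; minikube's real addon code differs:

    package main

    import (
    	"fmt"
    	"sync"
    	"time"
    )

    // applyWithRetry stands in for one of the independent per-addon retry
    // loops visible in the log.
    func applyWithRetry(manifest string) {
    	delay := 800 * time.Millisecond
    	for attempt := 1; attempt <= 3; attempt++ {
    		fmt.Printf("apply %s (attempt %d)\n", manifest, attempt)
    		time.Sleep(delay) // placeholder for the failed apply + backoff
    		delay *= 2
    	}
    }

    func main() {
    	manifests := []string{
    		"/etc/kubernetes/addons/dashboard-ns.yaml", // first of the dashboard set
    		"/etc/kubernetes/addons/storage-provisioner.yaml",
    		"/etc/kubernetes/addons/storageclass.yaml",
    	}
    	var wg sync.WaitGroup
    	for _, m := range manifests {
    		wg.Add(1)
    		go func(m string) { // one independent loop per addon, as in the log
    			defer wg.Done()
    			applyWithRetry(m)
    		}(m)
    	}
    	wg.Wait()
    }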
	I1217 02:08:09.510829    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:10.010907    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:10.513112    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:11.010572    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:11.509543    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.010570    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.409071    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:12.497495    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:12.497495    6296 retry.go:31] will retry after 9.788092681s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:12.510004    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:13.011338    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:13.509984    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:14.010499    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:14.511126    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.010949    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.511741    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:16.011278    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:16.511157    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:17.010863    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:17.511273    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.010782    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.510594    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:19.011193    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:19.512050    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:20.011700    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:20.511001    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.010461    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.510457    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:22.011002    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:22.242227    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:22.290434    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:22.384800    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.384884    6296 retry.go:31] will retry after 11.75975207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:22.424758    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.424758    6296 retry.go:31] will retry after 15.557196078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.510556    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:23.011645    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:23.090496    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:23.176544    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:23.176625    6296 retry.go:31] will retry after 13.26458747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:23.510872    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.011245    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.511483    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:25.011656    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:25.510967    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:26.012125    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:26.512672    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:27.011155    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:27.512368    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:28.010889    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:28.511767    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.011035    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.512111    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:30.010919    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:30.510464    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:31.010433    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:31.511392    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.010680    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.510963    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:33.011818    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:33.511638    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:34.011591    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:34.151810    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:34.242474    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:34.242474    6296 retry.go:31] will retry after 23.644538854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:34.513602    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.011269    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.511142    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:36.011267    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:36.446774    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:08:36.511283    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:36.541778    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:36.541860    6296 retry.go:31] will retry after 14.024805043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:37.010743    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:37.510520    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:37.987959    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:08:38.011587    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:38.113276    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:38.113276    6296 retry.go:31] will retry after 20.609884455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:38.511817    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:39.012624    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:39.511353    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:40.011079    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:40.511636    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.011582    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.512671    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:42.011503    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:42.511640    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:43.011054    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:43.510485    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.011395    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.511333    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:45.011435    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:45.513316    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:46.012600    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:46.512307    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.012227    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.512888    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:48.011996    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:48.511276    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:49.011053    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:49.511776    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:50.011678    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:50.050889    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.050889    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:50.055201    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:50.085770    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.085770    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:50.090316    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:50.123762    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.123762    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:50.127529    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:50.157626    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.157626    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:50.163652    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:50.189945    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.189945    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:50.193620    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:50.222819    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.222866    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:50.227818    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:50.256909    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.256909    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:50.260970    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:50.290387    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.290387    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:50.290387    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:50.290387    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:50.357876    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:50.357876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:50.420098    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:50.420098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:50.460376    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:50.460376    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:50.542989    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:50.534097    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.535406    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.536541    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.537655    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.539165    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:50.534097    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.535406    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.536541    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.537655    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.539165    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:50.542989    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:50.542989    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:50.570331    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:50.645772    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:50.645772    6296 retry.go:31] will retry after 16.344343138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:53.075519    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:53.098924    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:53.131675    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.131675    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:53.135542    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:53.166511    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.166511    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:53.170265    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:53.198547    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.198547    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:53.202694    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:53.232459    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.232459    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:53.235758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:53.263802    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.263802    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:53.268318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:53.296956    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.296956    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:53.301349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:53.331331    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.331331    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:53.335255    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:53.367520    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.367550    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:53.367577    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:53.367602    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:53.453750    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:53.444459    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.445431    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.446930    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.448003    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.449000    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:53.444459    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.445431    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.446930    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.448003    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.449000    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:53.453837    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:53.453887    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:53.485058    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:53.485058    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:53.540050    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:53.540050    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:53.604101    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:53.604101    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.146858    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:56.172227    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:56.203897    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.203941    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:56.207562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:56.236114    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.236114    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:56.240341    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:56.274958    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.274958    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:56.280577    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:56.308906    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.308906    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:56.312811    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:56.340777    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.340836    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:56.343843    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:56.371408    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.371441    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:56.374771    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:56.406487    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.406487    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:56.410973    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:56.441247    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.441247    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:56.441247    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:56.441247    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:56.506877    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:56.506877    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.548841    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:56.548841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:56.633101    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:56.624778    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.625942    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.626969    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.628325    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.629359    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:56.624778    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.625942    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.626969    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.628325    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.629359    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:56.633101    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:56.633101    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:56.659421    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:56.659457    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:57.892877    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:57.970838    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:57.970838    6296 retry.go:31] will retry after 27.385193451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:58.728649    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:58.834139    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:58.834680    6296 retry.go:31] will retry after 32.13321777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:59.213728    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:59.238361    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:59.266298    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.266298    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:59.270295    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:59.299414    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.299414    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:59.302581    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:59.335627    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.335627    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:59.339238    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:59.367042    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.367042    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:59.371258    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:59.401507    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.401507    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:59.405468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:59.436657    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.436657    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:59.440955    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:59.471027    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.471027    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:59.474047    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:59.505164    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.505164    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:59.505164    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:59.505164    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:59.533835    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:59.533835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:59.586695    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:59.587671    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:59.648841    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:59.648841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:59.688691    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:59.688691    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:59.777044    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:59.763261    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.764003    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.767722    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.770018    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.771065    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:59.763261    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.764003    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.767722    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.770018    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.771065    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
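At this point every probe agrees on the root symptom: no k8s_kube-apiserver container exists and nothing answers on localhost:8443 inside the node. Two quick manual checks from the host reproduce what the log is testing (hypothetical session; `<profile>` is a placeholder for the test's profile name, and `ss` is assumed present in the node image):

    # Is anything listening on the apiserver port inside the node?
    minikube ssh -p <profile> -- "sudo ss -tlnp | grep 8443 || echo 'apiserver not listening'"
    # Did an apiserver container ever get created?
    minikube ssh -p <profile> -- "docker ps -a --filter name=k8s_kube-apiserver"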
	I1217 02:09:02.282707    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:02.307570    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:02.340326    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.340412    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:02.343993    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:02.374035    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.374079    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:02.377688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:02.409724    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.409724    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:02.414154    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:02.442993    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.442993    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:02.447591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:02.474966    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.474966    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:02.479447    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:02.511675    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.511675    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:02.515939    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:02.544034    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.544034    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:02.548633    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:02.578196    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.578196    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:02.578196    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:02.578196    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:02.642449    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:02.643420    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:02.681562    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:02.681562    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:02.766017    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:02.754951    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.756418    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.757119    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.759531    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.760553    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:02.754951    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.756418    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.757119    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.759531    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.760553    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:02.766017    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:02.766017    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:02.795166    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:02.795166    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
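The `--filter=name=k8s_...` queries in each cycle rely on the naming convention kubelet's dockershim/cri-dockerd integration uses: pod containers are named `k8s_<container>_<pod>_<namespace>_<uid>_<attempt>`, so a name filter on the `k8s_` prefix finds them whether running or exited. For example:

    # List any kube-apiserver pod container, running or not, with its status
    docker ps -a --filter "name=k8s_kube-apiserver" --format '{{.ID}} {{.Status}}'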
	I1217 02:09:05.347132    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:05.372840    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:05.424611    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.424686    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:05.428337    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:05.461682    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.461682    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:05.465790    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:05.495395    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.495395    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:05.499215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:05.528620    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.528620    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:05.532226    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:05.560375    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.560375    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:05.564119    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:05.595214    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.595214    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:05.600088    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:05.633183    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.633183    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:05.636776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:05.664840    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.664840    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:05.664840    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:05.664840    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:05.718503    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:05.718503    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:05.781489    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:05.781489    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:05.821081    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:05.821081    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:05.905451    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:05.896107    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.897043    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.898918    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.899910    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.901056    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:05.896107    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.897043    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.898918    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.899910    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.901056    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:05.905451    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:05.905451    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:06.996471    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:09:07.077056    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:07.077056    6296 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
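Because the callback never succeeded, the default-storageclass addon is left unapplied even though the start continues. Once the apiserver is actually healthy, it can be re-applied with the standard addon command (`<profile>` is a placeholder):

    minikube addons enable default-storageclass -p <profile>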
	I1217 02:09:08.443326    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:08.470285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:08.499191    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.499191    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:08.503346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:08.531727    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.531727    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:08.535874    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:08.567724    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.567724    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:08.571504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:08.601814    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.601814    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:08.605003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:08.638738    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.638815    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:08.642116    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:08.672949    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.672949    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:08.676953    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:08.706081    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.706145    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:08.709298    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:08.737856    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.737856    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:08.737856    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:08.737856    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:08.798236    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:08.798236    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:08.838053    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:08.838053    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:08.925271    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:08.915579    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.916804    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.917832    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.919242    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.920277    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:08.915579    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.916804    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.917832    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.919242    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.920277    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:08.925271    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:08.925271    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:08.952860    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:08.952934    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
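The `sudo pgrep -xnf kube-apiserver.*minikube.*` probe that opens each cycle is the process-level liveness check: `-f` matches against the full command line, `-x` requires that line to match the pattern exactly, and `-n` returns only the newest match. Run by hand it looks like this (illustrative):

    # Exit status 0 with a PID when an apiserver process exists; exit 1 otherwise
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
      && echo "apiserver process found" || echo "no apiserver process"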
	I1217 02:09:11.505032    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:11.532273    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:11.560855    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.560907    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:11.564808    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:11.595967    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.596024    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:11.599911    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:11.628443    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.628443    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:11.632103    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:11.659899    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.659899    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:11.663896    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:11.695830    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.695864    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:11.699333    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:11.728245    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.728314    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:11.731834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:11.762004    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.762038    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:11.765497    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:11.800437    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.800437    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:11.800437    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:11.800437    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.850659    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:11.850659    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:11.927328    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:11.927328    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:11.968115    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:11.968115    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:12.061366    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:12.049456    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.050395    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.051658    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.052989    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.055935    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:12.049456    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.050395    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.051658    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.052989    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.055935    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:12.061366    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:12.061366    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
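Each gather pass pulls the same four sources: the kubelet and container-runtime journals, kernel messages, and a node describe. The journal and dmesg slices can be reproduced by hand inside the node (e.g. via `minikube ssh`); a sketch with slightly simplified dmesg flags:

    sudo journalctl -u kubelet -n 400                              # last 400 kubelet log lines
    sudo journalctl -u docker -u cri-docker -n 400                 # container runtime logs
    sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400     # kernel warnings and errors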
	I1217 02:09:14.593463    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:14.619698    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:14.649625    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.649625    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:14.653809    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:14.682807    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.682865    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:14.686225    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:14.716867    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.716867    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:14.720947    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:14.748712    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.748712    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:14.753598    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:14.786467    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.786467    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:14.790745    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:14.820388    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.820388    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:14.824364    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:14.856683    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.856715    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:14.860387    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:14.907334    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.907388    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:14.907388    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:14.907388    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:14.970536    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:14.971543    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:15.009837    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:15.009837    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:15.100833    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:15.089537    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.090644    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.091541    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.092652    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.093429    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:15.089537    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.090644    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.091541    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.092652    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.093429    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:15.100833    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:15.100833    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:15.129774    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:15.129838    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
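Note that the node describe is run with the kubectl binary minikube stages inside the node rather than the host's kubectl, pinned to the cluster's Kubernetes version and pointed at the node's kubeconfig. The same command from the log, runnable manually inside the node:

    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig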
	I1217 02:09:17.687506    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:17.711884    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:17.740676    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.740676    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:17.743807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:17.775526    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.775598    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:17.779196    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:17.810564    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.810564    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:17.815366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:17.847149    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.847149    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:17.850304    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:17.880825    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.880825    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:17.884416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:17.913663    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.913663    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:17.917519    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:17.949675    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.949736    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:17.953399    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:17.981777    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.981777    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:17.981853    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:17.981853    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:18.045143    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:18.045143    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:18.085682    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:18.085682    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:18.174824    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:18.164839    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.166260    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.167755    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.169313    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.170543    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:18.164839    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.166260    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.167755    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.169313    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.170543    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:18.174862    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:18.174890    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:18.201721    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:18.201721    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:20.754573    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:20.779418    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:20.815289    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.815336    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:20.821329    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:20.849494    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.849566    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:20.853416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:20.886139    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.886213    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:20.890864    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:20.921623    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.921691    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:20.925413    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:20.955001    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.955030    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:20.959115    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:20.986446    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.986446    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:20.990622    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:21.019381    6296 logs.go:282] 0 containers: []
	W1217 02:09:21.019903    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:21.023386    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:21.049708    6296 logs.go:282] 0 containers: []
	W1217 02:09:21.049708    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:21.049708    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:21.049708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:21.114512    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:21.114512    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:21.154312    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:21.154312    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:21.241835    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:21.232254    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.233191    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.235446    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.236247    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.238241    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:21.232254    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.233191    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.235446    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.236247    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.238241    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:21.241835    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:21.241835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:21.269935    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:21.269935    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:23.827385    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:23.851293    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:23.884017    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.884017    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:23.887852    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:23.920819    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.920819    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:23.925124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:23.953397    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.953468    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:23.957090    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:23.987965    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.987965    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:23.992238    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:24.021188    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.021188    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:24.027472    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:24.059066    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.059066    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:24.062927    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:24.092066    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.092066    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:24.096083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:24.130020    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.130083    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:24.130083    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:24.130083    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:24.193264    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:24.193264    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:24.233590    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:24.233590    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:24.334738    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:24.323376    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.324478    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.325163    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327407    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327995    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:24.323376    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.324478    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.325163    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327407    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327995    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:24.334738    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:24.334738    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:24.361711    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:24.361711    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:25.361736    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:09:25.443830    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:25.443830    6296 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:09:26.915928    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:26.940552    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:26.972265    6296 logs.go:282] 0 containers: []
	W1217 02:09:26.972334    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:26.975468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:27.004131    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.004131    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:27.007688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:27.040755    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.040755    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:27.044298    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:27.075607    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.075607    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:27.079764    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:27.109726    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.109726    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:27.113807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:27.142060    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.142060    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:27.145049    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:27.179827    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.179898    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:27.183340    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:27.212340    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.212340    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:27.212340    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:27.212340    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:27.290453    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:27.280957    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.282008    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.283593    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.284873    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.286226    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:27.280957    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.282008    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.283593    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.284873    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.286226    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:27.290453    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:27.290453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:27.317919    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:27.317919    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:27.372636    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:27.372636    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:27.434881    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:27.434881    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:29.980965    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:30.007081    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:30.038766    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.038766    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:30.042837    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:30.074216    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.074277    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:30.077495    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:30.109815    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.109815    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:30.113543    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:30.144692    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.144692    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:30.148595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:30.181530    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.181530    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:30.185056    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:30.230054    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.230054    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:30.233965    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:30.264421    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.264421    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:30.268191    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:30.302463    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.302463    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:30.302463    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:30.302463    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:30.369905    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:30.369905    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:30.407364    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:30.407364    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:30.501045    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:30.489137    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.491259    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.493208    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.494311    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.496063    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:30.489137    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.491259    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.493208    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.494311    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.496063    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:30.501045    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:30.501045    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:30.529058    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:30.529119    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:30.973740    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:09:31.053832    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:31.053832    6296 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:09:31.057712    6296 out.go:179] * Enabled addons: 
	I1217 02:09:31.060716    6296 addons.go:530] duration metric: took 1m41.3245326s for enable addons: enabled=[]
	I1217 02:09:33.093000    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:33.117479    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:33.148299    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.148299    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:33.152403    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:33.180747    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.180747    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:33.184258    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:33.214319    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.214389    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:33.217921    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:33.244463    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.244463    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:33.248882    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:33.280520    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.280573    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:33.284251    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:33.313836    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.313883    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:33.318949    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:33.351545    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.351545    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:33.355242    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:33.384638    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.384638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:33.384638    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:33.384638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:33.438624    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:33.438624    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:33.503148    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:33.504145    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:33.542770    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:33.542770    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:33.628872    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:33.616788    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.618355    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.619202    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.622311    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.623559    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:33.616788    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.618355    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.619202    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.622311    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.623559    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:33.628872    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:33.628872    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:36.163766    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:36.190660    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:36.219485    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.219485    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:36.223169    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:36.253826    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.253826    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:36.257584    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:36.289684    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.289684    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:36.293455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:36.321228    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.321228    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:36.326076    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:36.355893    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.355893    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:36.360432    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:36.392307    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.392359    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:36.395377    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:36.427797    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.427797    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:36.431432    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:36.465462    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.465547    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:36.465590    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:36.465605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:36.515585    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:36.515688    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:36.577828    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:36.577828    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:36.617923    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:36.617923    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:36.706865    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:36.696037    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.697154    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.698217    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.699314    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.700190    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:36.696037    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.697154    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.698217    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.699314    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.700190    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:36.706865    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:36.706865    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:39.240583    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:39.269426    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:39.300548    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.300548    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:39.304455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:39.337640    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.337640    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:39.341427    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:39.375280    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.375280    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:39.379328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:39.408206    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.408291    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:39.413138    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:39.439760    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.439760    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:39.443728    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:39.470865    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.471120    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:39.477630    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:39.510101    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.510101    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:39.515759    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:39.545423    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.545494    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:39.545494    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:39.545559    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:39.574474    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:39.574474    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:39.627410    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:39.627410    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:39.687852    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:39.687852    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:39.730823    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:39.730823    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:39.820771    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:39.809479    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.810890    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.811655    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.814487    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.816836    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:39.809479    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.810890    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.811655    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.814487    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.816836    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:42.326489    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:42.349989    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:42.381673    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.381673    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:42.385392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:42.414575    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.414575    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:42.418510    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:42.452120    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.452120    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:42.456157    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:42.484625    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.484625    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:42.487782    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:42.520235    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.520235    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:42.525546    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:42.558589    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.558589    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:42.561770    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:42.592364    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.592364    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:42.596368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:42.625522    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.625522    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:42.625522    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:42.625522    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:42.661616    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:42.661616    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:42.748046    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:42.737433    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.739312    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.740542    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.743197    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.744170    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:42.737433    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.739312    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.740542    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.743197    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.744170    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:42.748046    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:42.748046    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:42.778854    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:42.778854    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:42.827860    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:42.827860    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:45.394220    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:45.418501    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:45.453084    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.453132    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:45.457433    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:45.491679    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.491679    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:45.495517    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:45.524934    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.524934    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:45.528788    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:45.559787    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.559837    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:45.563714    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:45.608019    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.608104    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:45.612132    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:45.639869    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.639869    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:45.644002    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:45.671767    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.671767    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:45.675466    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:45.704056    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.704104    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:45.704104    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:45.704104    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:45.766557    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:45.766557    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:45.807449    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:45.807449    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:45.898686    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:45.887850    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.888794    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.889893    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.891161    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.894108    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:45.898686    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:45.898686    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:45.924614    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:45.924614    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
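Every describe-nodes attempt fails the same way: kubectl cannot reach https://localhost:8443, meaning nothing is listening on the apiserver port inside the node. A quick manual check, assuming a shell inside the node; ss may not ship in every base image, in which case the kubectl probe alone is enough:

    # Is anything bound to the apiserver port?
    sudo ss -ltn 'sport = :8443'
    # Ask the apiserver directly, with the same binary and kubeconfig the log uses.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
      --kubeconfig=/var/lib/minikube/kubeconfig get --raw /healthz \
      || echo "apiserver not reachable (matches the connection-refused errors)"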
	I1217 02:09:48.482563    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:48.510137    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:48.546063    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.546063    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:48.551905    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:48.588536    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.588617    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:48.592628    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:48.621540    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.621540    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:48.625701    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:48.653505    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.653505    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:48.659485    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:48.688940    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.689008    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:48.692649    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:48.718858    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.718858    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:48.722907    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:48.752451    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.752451    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:48.755913    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:48.785865    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.785903    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:48.785903    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:48.785948    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:48.842730    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:48.843261    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:48.905352    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:48.905352    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:48.945271    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:48.945271    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:49.027913    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:49.016272    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.017718    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.022195    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.023419    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.024431    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:49.027963    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:49.027963    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:51.563182    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:51.587223    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:51.619597    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.619621    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:51.623355    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:51.652069    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.652152    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:51.655716    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:51.684602    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.684653    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:51.687735    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:51.716327    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.716327    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:51.720054    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:51.750202    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.750266    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:51.753821    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:51.781863    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.781863    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:51.785648    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:51.814791    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.814841    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:51.818565    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:51.850654    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.850654    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:51.850654    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:51.850654    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:51.912429    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:51.912429    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:51.951795    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:51.951795    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:52.035486    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:52.024665    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.026342    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.028055    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.029764    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.030775    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:52.035486    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:52.035486    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:52.063472    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:52.063472    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
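The "container status" step relies on a shell fallback: run crictl if which can find it, otherwise fall back to the Docker CLI. A cleaner equivalent of that one-liner, with an illustrative function name:

    # Prefer crictl when present, otherwise fall back to docker; same effect
    # as the `which crictl || echo crictl` one-liner in the log.
    container_status() {
      if command -v crictl >/dev/null 2>&1; then
        sudo crictl ps -a
      else
        sudo docker ps -a
      fi
    }
    container_status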
	I1217 02:09:54.631678    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:54.657392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:54.689037    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.689037    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:54.692460    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:54.723231    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.723231    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:54.729158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:54.759168    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.759168    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:54.762883    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:54.792371    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.792371    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:54.796165    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:54.828375    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.828375    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:54.832201    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:54.862409    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.862476    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:54.866107    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:54.897161    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.897161    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:54.900834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:54.947452    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.947452    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:54.947452    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:54.947452    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:55.016411    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:55.016411    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:55.055628    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:55.055628    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:55.152557    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:55.141168    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.142077    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.145931    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.147597    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.148932    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:55.152599    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:55.152599    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:55.180492    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:55.180492    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
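Between gather passes the loop re-checks for a running apiserver with pgrep: with -f the pattern is matched against the full command line, -x requires the whole line to match, and -n keeps only the newest match. A sketch of the same readiness check:

    # Exit status tells us whether a kube-apiserver process exists yet.
    if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
      echo "apiserver process is up"
    else
      echo "apiserver process not found; keep polling"
    fi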
	I1217 02:09:57.741989    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:57.768328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:57.799200    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.799200    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:57.803065    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:57.832042    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.832042    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:57.835921    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:57.863829    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.863891    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:57.867347    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:57.896797    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.896822    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:57.900369    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:57.929832    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.929907    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:57.933326    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:57.960278    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.960278    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:57.964215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:57.992277    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.992324    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:57.995951    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:58.026155    6296 logs.go:282] 0 containers: []
	W1217 02:09:58.026254    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:58.026254    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:58.026303    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:58.091999    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:58.091999    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:58.131520    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:58.131520    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:58.226831    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:58.216784    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.218266    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.219997    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.221198    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.222992    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:58.226831    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:58.226831    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:58.256592    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:58.256635    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:00.809919    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:00.842222    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:00.872955    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.872955    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:00.876666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:00.906031    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.906031    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:00.909593    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:00.939873    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.939946    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:00.943346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:00.972609    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.972643    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:00.975886    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:01.005269    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.005269    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:01.009766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:01.041677    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.041677    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:01.048361    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:01.081235    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.081312    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:01.084849    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:01.113437    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.113437    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:01.113437    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:01.113437    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:01.160067    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:01.160624    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:01.225071    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:01.225071    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:01.265307    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:01.265307    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:01.348506    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:01.336920    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.338210    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.339738    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.341232    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.342188    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:01.348535    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:01.348571    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:03.891628    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:03.925404    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:03.965688    6296 logs.go:282] 0 containers: []
	W1217 02:10:03.965688    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:03.968982    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:04.006348    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.006348    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:04.009769    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:04.039968    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.039968    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:04.044404    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:04.078472    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.078472    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:04.081894    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:04.113348    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.113348    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:04.117138    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:04.148885    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.148885    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:04.152756    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:04.181559    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.181616    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:04.185351    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:04.217017    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.217017    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:04.217017    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:04.217017    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:04.284540    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:04.284540    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:04.324402    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:04.324402    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:04.409943    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:04.395416    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.396326    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.402206    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.403321    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.404006    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:04.409943    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:04.409943    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:04.438771    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:04.438771    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
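For offline triage it can be simpler to capture the four sources the loop polls in a single pass. A sketch assuming a shell inside the node; the output path is illustrative:

    # One-shot capture of the same sources the loop gathers repeatedly.
    {
      echo '== kubelet ==';    sudo journalctl -u kubelet -n 400
      echo '== docker ==';     sudo journalctl -u docker -u cri-docker -n 400
      echo '== dmesg ==';      sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
      echo '== containers =='; sudo docker ps -a
    } > /tmp/minikube-triage.log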
	I1217 02:10:06.997897    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:07.024185    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:07.054915    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.055512    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:07.060167    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:07.089778    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.089778    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:07.093773    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:07.124641    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.124641    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:07.128016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:07.154834    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.154915    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:07.158505    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:07.188568    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.188568    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:07.192962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:07.225078    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.225078    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:07.228699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:07.258599    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.258659    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:07.262590    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:07.291623    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.291623    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:07.291623    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:07.291623    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:07.322611    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:07.322611    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:07.374970    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:07.374970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:07.438795    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:07.438795    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:07.479442    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:07.479442    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:07.566162    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:07.555486    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.557015    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.558199    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559195    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559622    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:10.072312    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:10.096505    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:10.125617    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.125617    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:10.129377    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:10.157921    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.157921    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:10.161850    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:10.191705    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.191705    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:10.196003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:10.224412    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.224482    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:10.229368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:10.258140    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.258140    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:10.261205    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:10.292047    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.292047    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:10.296511    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:10.325818    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.325818    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:10.329752    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:10.359454    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.359530    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:10.359530    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:10.359530    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:10.413970    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:10.413970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:10.476665    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:10.476665    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:10.516335    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:10.516335    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:10.602353    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:10.592838    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.594139    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.595393    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.596552    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.597619    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:10.602353    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:10.602353    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:13.134148    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:13.159720    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:13.191534    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.191534    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:13.195626    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:13.230035    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.230035    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:13.233817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:13.266476    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.266476    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:13.270598    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:13.305852    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.305852    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:13.310349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:13.341805    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.341867    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:13.345346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:13.377945    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.377945    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:13.381659    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:13.411885    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.411957    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:13.416039    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:13.446642    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.446642    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:13.446642    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:13.446642    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:13.487083    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:13.487083    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:13.574632    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:13.564930    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.565686    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.568158    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.569159    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.570310    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
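
Note that the describe-nodes probe does not use the host's kubectl: it runs the version-pinned binary minikube installed inside the node under /var/lib/minikube/binaries/v1.35.0-beta.0/ and points it at the node-local kubeconfig, so the diagnosis reflects exactly what the node itself can reach. Reduced to a sketch:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Version-matched kubectl inside the node plus the node-local kubeconfig.
	kubectl := "/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl"
	out, err := exec.Command(kubectl, "describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		// With no apiserver listening this exits 1, and stderr carries the
		// discovery errors quoted above.
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}
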
	I1217 02:10:13.574632    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:13.574632    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:13.604181    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:13.604702    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
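
The container-status gather is runtime-agnostic by construction: `which crictl || echo crictl` substitutes the full crictl path when the CRI tool is installed (and a bare name that simply fails when it is not), and the outer || falls back to docker ps -a. Together with the journalctl tails (-u names a systemd unit and may be repeated, -n 400 caps the output), this yields one snapshot per gather regardless of the runtime in use. The fallback chain as a sketch:

package main

import (
	"fmt"
	"os/exec"
)

// listContainers tries crictl first and falls back to docker, mirroring the
// bash one-liner in the log.
func listContainers() ([]byte, error) {
	if out, err := exec.Command("crictl", "ps", "-a").CombinedOutput(); err == nil {
		return out, nil
	}
	return exec.Command("docker", "ps", "-a").CombinedOutput()
}

func main() {
	out, err := listContainers()
	if err != nil {
		fmt.Println("no usable container runtime CLI:", err)
		return
	}
	fmt.Print(string(out))
}
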
	I1217 02:10:13.660020    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:13.660020    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:16.225038    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:16.248922    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:16.280247    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.280247    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:16.284285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:16.312596    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.312596    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:16.316952    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:16.345108    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.345108    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:16.348083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:16.377403    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.377403    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:16.380619    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:16.410555    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.410555    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:16.414048    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:16.446454    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.446454    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:16.449405    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:16.478967    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.478967    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:16.484108    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:16.516422    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.516422    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:16.516422    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:16.516422    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:16.580305    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:16.580305    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:16.618663    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:16.618663    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:16.705105    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:16.694074    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.695040    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.696842    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.698676    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.700646    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:16.705105    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:16.705105    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:16.732046    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:16.732046    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
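
From this point the log simply repeats: the timestamps show the same sweep re-running roughly every three seconds (02:10:13, :16, :19, ...) as the harness polls for an apiserver that never appears, until the start deadline expires. Schematically, with an assumed interval and deadline:

package main

import (
	"fmt"
	"time"
)

// apiserverUp stands in for the pgrep and docker ps probes above; the stub
// always fails, which is exactly what this log records.
func apiserverUp() bool { return false }

func main() {
	deadline := time.Now().Add(time.Minute) // assumed; the real timeout is longer
	for time.Now().Before(deadline) {
		if apiserverUp() {
			fmt.Println("kube-apiserver is up")
			return
		}
		// Gather kubelet/dmesg/describe-nodes/Docker/container-status logs
		// here, then back off before the next probe.
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}
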
	I1217 02:10:19.284431    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:19.307909    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:19.340842    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.340842    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:19.344830    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:19.371150    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.371150    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:19.374863    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:19.403216    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.403216    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:19.406907    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:19.433979    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.433979    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:19.438046    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:19.469636    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.469636    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:19.473675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:19.504296    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.504296    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:19.508671    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:19.535932    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.535932    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:19.539707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:19.567355    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.567416    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:19.567416    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:19.567416    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:19.629876    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:19.629876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:19.678547    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:19.678547    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:19.785306    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:19.776195    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.777270    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.778111    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.779442    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.780820    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:19.785306    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:19.785371    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:19.813137    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:19.813137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:22.369643    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:22.396731    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:22.431018    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.431018    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:22.434688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:22.463307    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.463307    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:22.467323    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:22.497065    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.497065    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:22.500574    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:22.531497    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.531564    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:22.535088    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:22.563706    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.563779    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:22.567344    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:22.602516    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.602597    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:22.606242    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:22.637637    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.637699    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:22.641314    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:22.668078    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.668078    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:22.668078    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:22.668078    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:22.754963    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:22.744973    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.745956    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.748143    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.749016    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.751155    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:22.754963    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:22.754963    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:22.783172    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:22.783222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:22.840048    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:22.840048    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:22.900137    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:22.900137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:25.445900    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:25.472646    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:25.502929    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.502929    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:25.506274    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:25.537721    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.537721    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:25.543044    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:25.572924    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.572924    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:25.576391    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:25.607737    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.607798    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:25.611457    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:25.644967    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.645041    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:25.648690    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:25.677801    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.677801    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:25.681530    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:25.709148    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.709148    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:25.715667    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:25.746892    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.746892    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:25.746892    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:25.746892    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:25.796336    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:25.796336    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:25.862353    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:25.862353    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:25.902100    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:25.902100    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:25.988926    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:25.979946    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.980923    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.983755    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.985453    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.986609    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:25.988926    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:25.988926    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:28.523475    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:28.549366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:28.580055    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.580055    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:28.583822    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:28.615168    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.615168    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:28.618724    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:28.650344    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.650368    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:28.654014    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:28.704033    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.704033    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:28.707699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:28.738871    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.738938    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:28.743270    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:28.775432    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.775432    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:28.779176    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:28.810234    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.810351    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:28.814357    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:28.845783    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.845783    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:28.845783    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:28.845783    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:28.902626    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:28.902626    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:28.963758    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:28.963758    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:29.002141    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:29.002141    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:29.104674    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:29.094415    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.095636    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.096872    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.097927    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.099112    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:29.104674    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:29.104674    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:31.640270    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:31.668862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:31.703099    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.703099    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:31.706355    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:31.737408    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.737408    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:31.741549    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:31.771462    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.771549    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:31.775645    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:31.803600    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.803600    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:31.807313    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:31.835884    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.835884    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:31.840000    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:31.870518    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.870518    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:31.877548    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:31.905387    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.905387    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:31.909722    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:31.938258    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.938284    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:31.938284    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:31.938284    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:32.000115    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:32.000115    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:32.039351    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:32.039351    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:32.128849    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:32.117556    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.118519    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.121192    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.122137    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.123350    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:32.128849    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:32.128849    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:32.155670    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:32.155670    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:34.707099    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:34.732689    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:34.763625    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.763625    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:34.767349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:34.797435    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.797435    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:34.801415    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:34.828785    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.828785    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:34.832654    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:34.864748    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.864748    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:34.868392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:34.896365    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.896365    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:34.900474    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:34.932681    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.932681    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:34.936571    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:34.966056    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.966056    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:34.969208    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:34.998362    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.998362    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:34.998362    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:34.998362    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:35.036977    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:35.036977    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:35.134841    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:35.123096    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.125161    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.126319    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.127728    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.129900    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:35.134841    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:35.134841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:35.162429    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:35.162429    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:35.213960    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:35.214015    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:37.779857    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:37.806799    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:37.840730    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.840730    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:37.846443    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:37.875504    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.875504    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:37.879215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:37.910068    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.910068    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:37.913551    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:37.942897    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.942897    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:37.946741    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:37.978321    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.978321    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:37.982267    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:38.008421    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.008421    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:38.013043    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:38.043041    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.043041    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:38.049737    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:38.082117    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.082117    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:38.082117    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:38.082117    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:38.148970    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:38.148970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:38.189697    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:38.189697    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:38.276122    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:38.265842    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.267106    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.268317    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.270927    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.272044    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:38.276122    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:38.276122    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:38.304355    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:38.304355    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:40.862712    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:40.889041    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:40.921169    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.921169    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:40.924297    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:40.956313    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.956356    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:40.960294    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:40.990144    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.990144    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:40.993876    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:41.026732    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.026803    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:41.030745    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:41.073825    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.073825    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:41.078152    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:41.105859    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.105859    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:41.111714    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:41.143286    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.143324    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:41.146776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:41.176314    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.176345    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:41.176345    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:41.176345    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:41.213266    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:41.213266    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:41.300305    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:41.290426    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.291562    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.292511    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.293690    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.294979    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:41.300305    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:41.300305    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:41.328560    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:41.328621    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:41.375953    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:41.375953    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:43.941613    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:43.967455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:44.000199    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.000199    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:44.003568    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:44.035058    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.035058    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:44.040590    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:44.083687    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.083687    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:44.087476    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:44.115776    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.115776    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:44.119318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:44.155471    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.155513    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:44.159433    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:44.191599    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.191636    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:44.195145    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:44.228181    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.228211    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:44.231971    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:44.259687    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.259763    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:44.259763    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:44.259763    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:44.323705    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:44.323705    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:44.365401    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:44.365401    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:44.453893    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:44.444848    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.446165    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.447569    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.449198    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.450326    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:44.444848    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.446165    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.447569    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.449198    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.450326    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:44.453893    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:44.453893    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:44.480694    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:44.480694    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:47.042501    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:47.067663    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:47.108433    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.108433    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:47.112206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:47.144336    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.144336    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:47.148449    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:47.182968    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.183049    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:47.186614    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:47.215738    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.215738    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:47.219595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:47.248444    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.248511    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:47.252434    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:47.280975    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.280975    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:47.284966    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:47.317178    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.317178    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:47.321223    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:47.352638    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.352638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:47.352638    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:47.352638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:47.390049    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:47.390049    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:47.479425    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:47.469913    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.471092    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.472262    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.473545    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.474680    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:47.469913    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.471092    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.472262    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.473545    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.474680    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:47.479425    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:47.479425    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:47.505331    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:47.505331    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:47.556431    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:47.556431    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:50.124255    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:50.151100    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:50.184499    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.184565    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:50.187696    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:50.221764    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.221764    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:50.225471    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:50.253823    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.253823    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:50.260470    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:50.289768    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.289815    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:50.295283    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:50.321597    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.321597    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:50.325774    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:50.356707    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.356707    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:50.360685    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:50.390099    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.390099    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:50.393971    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:50.420950    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.420950    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:50.420950    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:50.420950    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:50.484730    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:50.484730    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:50.523997    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:50.523997    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:50.618256    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:50.607046    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.608047    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.610609    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.611743    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.612938    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:50.607046    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.608047    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.610609    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.611743    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.612938    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:50.618256    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:50.618256    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:50.645077    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:50.645077    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:53.200622    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:53.223348    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:53.253589    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.253589    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:53.258688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:53.287647    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.287689    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:53.291555    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:53.324358    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.324403    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:53.327650    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:53.355417    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.355417    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:53.359780    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:53.390012    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.390012    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:53.393536    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:53.420636    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.420672    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:53.424429    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:53.453665    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.453744    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:53.456764    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:53.486769    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.486836    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:53.486875    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:53.486875    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:53.552513    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:53.552513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:53.593054    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:53.593054    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:53.683171    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:53.673168    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.674217    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.677093    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.678848    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.679784    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:53.673168    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.674217    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.677093    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.678848    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.679784    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:53.683207    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:53.683230    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:53.712513    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:53.712513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:56.288600    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:56.314380    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:56.347447    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.347447    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:56.351158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:56.381779    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.381779    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:56.385232    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:56.423000    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.423000    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:56.427083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:56.456635    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.456635    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:56.460509    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:56.490868    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.490868    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:56.496594    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:56.523671    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.523671    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:56.527847    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:56.559992    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.559992    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:56.565352    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:56.591708    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.591708    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:56.591708    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:56.591708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:56.656572    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:56.656572    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:56.696334    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:56.696334    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:56.788411    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:56.777962   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.779251   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.780163   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.782593   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.783670   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:56.777962   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.779251   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.780163   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.782593   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.783670   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:56.788411    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:56.788411    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:56.815762    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:56.815762    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:59.370676    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:59.404615    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:59.440735    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.440735    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:59.446758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:59.475209    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.475209    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:59.479521    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:59.509465    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.509465    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:59.513228    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:59.542409    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.542409    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:59.546008    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:59.575778    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.575778    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:59.579759    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:59.613465    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.613465    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:59.617266    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:59.645245    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.645245    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:59.649170    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:59.680413    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.680449    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:59.680449    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:59.680449    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:59.713987    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:59.713987    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:59.764930    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:59.764994    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:59.832077    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:59.832077    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:59.870681    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:59.870681    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:59.953336    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:59.942085   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.942906   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.945651   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.947051   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.948218   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:59.942085   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.942906   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.945651   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.947051   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.948218   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:02.457745    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:02.492666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:02.526665    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.526665    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:02.530862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:02.560353    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.560413    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:02.564099    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:02.595430    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.595430    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:02.599884    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:02.629744    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.629744    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:02.633637    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:02.662623    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.662623    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:02.666817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:02.694696    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.694696    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:02.698194    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:02.727384    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.727442    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:02.731483    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:02.766114    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.766114    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:02.766114    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:02.766114    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:02.830755    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:02.830755    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:02.870216    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:02.870216    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:02.958327    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:02.947356   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.948306   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.949403   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.950298   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.952486   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:02.947356   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.948306   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.949403   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.950298   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.952486   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:02.958327    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:02.958380    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:02.984980    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:02.984980    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:05.540158    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:05.564812    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:05.595638    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.595638    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:05.599748    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:05.628748    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.628748    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:05.632878    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:05.666232    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.666257    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:05.670293    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:05.699654    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.699654    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:05.703004    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:05.733113    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.733113    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:05.737096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:05.765591    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.765639    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:05.770398    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:05.796360    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.796360    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:05.800240    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:05.829847    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.829914    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:05.829914    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:05.829945    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:05.880789    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:05.880789    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:05.943002    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:05.943002    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:05.983389    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:05.983389    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:06.076023    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:06.063780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.064562   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.067564   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.069726   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.070666   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:06.063780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.064562   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.067564   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.069726   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.070666   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:06.076023    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:06.076023    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:08.608606    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:08.632215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:08.665017    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.665017    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:08.669299    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:08.695355    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.695355    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:08.699306    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:08.729054    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.729054    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:08.732454    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:08.759881    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.759881    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:08.764328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:08.793695    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.793777    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:08.797908    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:08.826225    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.826225    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:08.829679    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:08.859645    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.859645    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:08.863083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:08.893657    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.893657    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:08.893657    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:08.893657    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:08.958163    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:08.958163    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:08.997418    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:08.997418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:09.087973    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:09.074815   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.076834   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.078823   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.080747   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.081590   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:09.074815   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.076834   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.078823   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.080747   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.081590   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:09.087973    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:09.087973    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:09.115687    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:09.115687    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:11.697770    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:11.725676    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:11.758809    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.758809    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:11.762929    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:11.794198    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.794198    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:11.798023    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:11.828890    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.828890    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:11.833358    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:11.865217    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.865217    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:11.868915    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:11.897672    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.897672    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:11.901235    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:11.931725    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.931808    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:11.935264    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:11.966263    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.966263    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:11.970422    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:11.999856    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.999856    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
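Each cycle probes for the control-plane components by container name: kubeadm-managed pods surface in Docker as containers named k8s_<component>_..., so an empty ID list is what produces the "No container was found matching" warnings above. A sketch of the same probe, looping over the component names from the log:

    #!/usr/bin/env bash
    # One pass over the components probed in every cycle above.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      if [ -z "$ids" ]; then
        echo "no container found matching \"$c\""
      else
        echo "$c: $ids"
      fi
    done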
	I1217 02:11:11.999856    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:11.999856    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:12.064137    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:12.064137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:12.102491    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:12.102491    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:12.183568    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:12.174095   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.175081   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.176122   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.177427   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.178548   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:12.183568    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:12.183568    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:12.212178    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:12.212178    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:14.772821    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:14.797656    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:14.826900    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.826900    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:14.829894    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:14.859202    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.859202    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:14.862783    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:14.891414    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.891414    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:14.895052    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:14.925404    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.925404    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:14.928966    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:14.959295    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.959330    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:14.962893    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:14.991696    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.991730    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:14.994776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:15.025468    6296 logs.go:282] 0 containers: []
	W1217 02:11:15.025468    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:15.031674    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:15.060661    6296 logs.go:282] 0 containers: []
	W1217 02:11:15.060661    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:15.060733    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:15.060733    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:15.120513    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:15.120513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:15.159608    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:15.159608    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:15.244418    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:15.235611   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.236439   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.238662   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.239643   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.240776   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
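Every "describe nodes" attempt fails identically: with no kube-apiserver container running (see the probes above), nothing listens on localhost:8443, so kubectl's API-group discovery is refused before any request is served. The failing call, reproduced with the guest paths from the log, plus a cheaper reachability check (the curl line is an added diagnostic, not something the test runs):

    # Inside the minikube guest; binary and kubeconfig paths are the ones logged.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig
    # Fails the same way while the apiserver is down: connection refused, not an HTTP error.
    curl -ksf https://localhost:8443/healthz || echo "apiserver not reachable"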
	I1217 02:11:15.244418    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:15.244418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:15.271288    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:15.271288    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:17.830556    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:17.850600    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:17.886696    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.886696    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:17.890674    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:17.921702    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.921702    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:17.924697    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:17.952692    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.952692    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:17.956701    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:17.984691    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.984691    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:17.988655    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:18.024626    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.024663    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:18.028558    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:18.060310    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.060310    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:18.064024    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:18.100124    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.100124    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:18.104105    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:18.141223    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.141223    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:18.141223    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:18.141223    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:18.179686    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:18.179686    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:18.311240    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:18.298507   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.299764   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.301130   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.305360   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.306018   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:18.311240    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:18.311240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:18.342566    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:18.342615    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:18.393872    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:18.393872    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
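The cycles above recur roughly every three seconds (02:11:09, :12, :15, :18, ...): each opens with a pgrep probe for a running apiserver and ends with another log sweep. A hypothetical wait loop with the same shape; the 3 s interval matches the timestamps, while the 60 s ceiling is an assumption for illustration, not minikube's real timeout:

    #!/usr/bin/env bash
    # Poll until an apiserver process for this profile exists (pattern from the log).
    deadline=$((SECONDS + 60))   # assumed ceiling, for illustration only
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      (( SECONDS >= deadline )) && { echo "kube-apiserver never started" >&2; exit 1; }
      sleep 3
    done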
	I1217 02:11:20.977693    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:21.006733    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:21.035136    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.035201    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:21.039202    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:21.069636    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.069636    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:21.075448    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:21.105437    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.105437    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:21.108735    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:21.136602    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.136602    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:21.140124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:21.168674    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.168674    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:21.172368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:21.204723    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.204723    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:21.208123    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:21.237130    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.237130    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:21.240654    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:21.268170    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.268170    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:21.268170    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:21.268170    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:21.333642    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:21.333642    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:21.372230    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:21.372230    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:21.467012    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:21.456191   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457465   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457898   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.460543   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.461536   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:21.467012    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:21.467012    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:21.495867    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:21.495867    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:24.053568    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:24.079587    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:24.110362    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.110399    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:24.113326    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:24.141818    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.141818    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:24.145313    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:24.172031    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.172031    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:24.176197    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:24.205114    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.205133    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:24.208437    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:24.238244    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.238244    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:24.242692    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:24.271687    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.271687    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:24.276384    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:24.307922    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.307922    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:24.311538    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:24.350108    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.350108    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:24.350108    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:24.350108    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:24.402159    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:24.402224    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:24.463824    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:24.463824    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:24.503645    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:24.503645    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:24.591969    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:24.584283   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.585294   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.586182   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.588436   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.589378   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:24.591969    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:24.591969    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
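The three recurring log sweeps are plain unit-scoped journal reads plus a severity-filtered dmesg; condensed, they are the following commands, with flags exactly as logged:

    sudo journalctl -u docker -u cri-docker -n 400    # container runtime + CRI shim units
    sudo journalctl -u kubelet -n 400                 # kubelet unit
    # no pager (-P), human-readable (-H), colour off (-L=never), warnings and worse
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400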
	I1217 02:11:27.123965    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:27.157839    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:27.199991    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.199991    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:27.204206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:27.231981    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.231981    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:27.235568    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:27.265668    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.265668    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:27.269162    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:27.299488    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.299488    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:27.303277    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:27.335769    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.335769    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:27.339516    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:27.369112    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.369112    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:27.372881    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:27.402031    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.402031    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:27.405780    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:27.436610    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.436610    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:27.436610    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:27.436610    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:27.523394    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:27.514396   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.515456   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.516979   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.518950   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.519928   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:27.523917    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:27.523957    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:27.552476    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:27.552476    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:27.607026    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:27.607078    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:27.670834    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:27.670834    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:30.216027    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:30.241711    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:30.272275    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.272275    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:30.276071    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:30.304635    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.304635    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:30.307639    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:30.340374    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.340374    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:30.343758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:30.374162    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.374162    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:30.378010    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:30.407836    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.407836    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:30.411411    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:30.440002    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.440002    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:30.443429    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:30.472647    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.472647    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:30.476538    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:30.510744    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.510744    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:30.510744    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:30.510744    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:30.575069    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:30.575156    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:30.639732    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:30.640731    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:30.685195    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:30.685195    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:30.775246    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:30.762447   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.763441   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.764998   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.765913   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.768466   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:30.775295    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:30.775295    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:33.308109    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:33.334329    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:33.365061    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.365061    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:33.370854    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:33.399488    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.399488    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:33.406335    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:33.436434    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.436434    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:33.439783    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:33.468947    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.468947    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:33.474014    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:33.502568    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.502568    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:33.506146    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:33.535706    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.535706    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:33.540016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:33.573811    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.573811    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:33.577712    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:33.606321    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.606321    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:33.606321    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:33.606321    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:33.671884    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:33.671884    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:33.712095    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:33.712095    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:33.800767    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:33.788569   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.789526   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.793280   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.794779   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.795796   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:33.800848    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:33.800884    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:33.829402    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:33.829474    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:36.410236    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:36.438912    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:36.468229    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.468229    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:36.472231    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:36.501220    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.501220    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:36.506462    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:36.539556    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.539556    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:36.543603    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:36.584367    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.584367    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:36.588513    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:36.620670    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.620670    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:36.626030    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:36.654239    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.654239    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:36.658962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:36.689023    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.689023    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:36.693754    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:36.721351    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.721351    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:36.721351    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:36.721351    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:36.787832    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:36.787832    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:36.828019    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:36.828019    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:36.916923    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:36.906317   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.907259   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.909560   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.910589   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.911494   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:36.916923    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:36.916923    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:36.946231    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:36.946265    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:39.498459    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:39.522909    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:39.553462    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.553462    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:39.557524    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:39.585462    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.585462    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:39.591342    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:39.619332    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.619399    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:39.623096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:39.651071    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.651071    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:39.654766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:39.683502    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.683502    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:39.687390    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:39.715332    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.715332    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:39.718932    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:39.749019    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.749019    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:39.752739    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:39.783378    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.783378    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:39.783378    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:39.783378    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:39.835019    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:39.835019    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:39.899542    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:39.899542    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:39.938717    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:39.938717    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:40.026359    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:40.016461   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.017619   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.018723   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.019917   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.021008   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:40.016461   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.017619   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.018723   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.019917   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.021008   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:40.026403    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:40.026446    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
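
Each cycle in the trace above begins with the same probe: minikube looks for the control-plane containers by their kubeadm-style names (k8s_kube-apiserver, k8s_etcd, and so on) and finds none. Below is a minimal, self-contained Go sketch of that probe; it assumes only a Docker CLI on PATH and illustrates the pattern, it is not minikube's actual logs.go code.

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// probeContainer lists all containers (running or exited) whose name
	// matches the kubeadm naming convention "k8s_<component>" and returns
	// their IDs. An empty result mirrors the "0 containers" lines above.
	func probeContainer(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "kubernetes-dashboard"} {
			ids, err := probeContainer(c)
			if err != nil {
				fmt.Printf("probe %q failed: %v\n", c, err)
				continue
			}
			if len(ids) == 0 {
				fmt.Printf("no container was found matching %q\n", c)
			} else {
				fmt.Printf("%s: %v\n", c, ids)
			}
		}
	}
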
	I1217 02:11:42.561805    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:42.585507    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:42.613091    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.613091    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:42.616991    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:42.647608    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.647608    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:42.651380    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:42.680540    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.680540    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:42.683625    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:42.717014    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.717014    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:42.721369    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:42.750017    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.750017    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:42.753961    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:42.785164    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.785164    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:42.788883    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:42.817424    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.817424    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:42.821266    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:42.853247    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.853247    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:42.853247    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:42.853247    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:42.910034    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:42.910052    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:42.970436    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:42.970436    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:43.009833    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:43.010830    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:43.102803    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:43.091179   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.092013   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.095588   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.097098   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.098447   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:43.091179   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.092013   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.095588   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.097098   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.098447   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:43.102803    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:43.102803    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:45.636418    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:45.661677    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:45.695141    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.695141    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:45.699189    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:45.729376    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.729376    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:45.733753    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:45.764365    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.764365    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:45.767917    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:45.799287    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.799287    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:45.802968    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:45.835270    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.835270    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:45.838766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:45.868660    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.868660    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:45.875727    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:45.903566    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.903566    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:45.907562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:45.937452    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.937452    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:45.937452    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:45.937452    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:45.965091    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:45.965091    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:46.013173    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:46.013173    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:46.077113    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:46.077113    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:46.118527    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:46.118527    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:46.207662    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:46.198319   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.199665   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.200697   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.201868   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.202946   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:46.198319   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.199665   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.200697   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.201868   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.202946   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:48.714055    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:48.741412    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:48.772767    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.772767    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:48.776092    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:48.804946    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.805020    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:48.808538    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:48.837488    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.837488    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:48.840453    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:48.871139    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.871139    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:48.875518    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:48.904264    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.904264    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:48.911351    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:48.939118    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.939118    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:48.943340    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:48.970934    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.970934    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:48.974990    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:49.005140    6296 logs.go:282] 0 containers: []
	W1217 02:11:49.005174    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:49.005205    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:49.005234    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:49.075925    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:49.075925    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:49.116144    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:49.116144    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:49.196968    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:49.188036   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.189151   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.190274   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.191246   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.192420   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:49.188036   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.189151   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.190274   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.191246   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.192420   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:49.197074    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:49.197074    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:49.222883    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:49.223404    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
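
Every "kubectl describe nodes" attempt above fails identically: the TCP dial to localhost:8443 is refused, meaning nothing is bound to the port at all, so the failure happens before TLS or authentication even begin. The Go sketch below separates those two cases when run on the node; the host and port come from the log itself, while the /readyz path is the standard kube-apiserver readiness endpoint (an assumption here, since the log never gets that far).

	package main

	import (
		"crypto/tls"
		"fmt"
		"net"
		"net/http"
		"time"
	)

	func main() {
		// Step 1: a plain TCP dial distinguishes "nothing bound to the port"
		// (connection refused, as in the log) from a listener rejecting TLS.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err) // e.g. "connect: connection refused"
			return
		}
		conn.Close()

		// Step 2: if something is listening, query the readiness endpoint.
		// InsecureSkipVerify is acceptable for a liveness probe; kubectl
		// itself verifies against the CA from the kubeconfig instead.
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get("https://localhost:8443/readyz")
		if err != nil {
			fmt.Println("readyz request failed:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("apiserver answered:", resp.Status)
	}

A refused dial here points at the apiserver container never starting, which is consistent with the empty docker ps probes throughout this trace.
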
	I1217 02:11:51.783312    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:51.809151    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:51.839751    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.839751    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:51.844016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:51.895178    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.895178    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:51.899341    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:51.930311    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.930311    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:51.933797    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:51.961857    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.961857    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:51.966036    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:51.993647    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.993647    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:51.997672    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:52.026485    6296 logs.go:282] 0 containers: []
	W1217 02:11:52.026485    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:52.032726    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:52.062039    6296 logs.go:282] 0 containers: []
	W1217 02:11:52.062039    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:52.066379    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:52.096772    6296 logs.go:282] 0 containers: []
	W1217 02:11:52.096772    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:52.096772    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:52.096772    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:52.163369    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:52.163369    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:52.203719    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:52.203719    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:52.295324    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:52.285688   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.286944   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.288407   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.289493   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.290536   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:52.285688   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.286944   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.288407   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.289493   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.290536   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:52.295324    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:52.295324    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:52.323234    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:52.323234    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:54.878824    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:54.907441    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:54.944864    6296 logs.go:282] 0 containers: []
	W1217 02:11:54.944864    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:54.948030    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:54.980769    6296 logs.go:282] 0 containers: []
	W1217 02:11:54.980769    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:54.987506    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:55.019726    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.019726    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:55.024226    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:55.052618    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.052618    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:55.056658    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:55.085528    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.085607    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:55.089212    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:55.120453    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.120525    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:55.124591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:55.154725    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.154749    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:55.157707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:55.187692    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.187692    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:55.187692    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:55.187692    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:55.252848    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:55.252848    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:55.318197    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:55.318197    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:55.358145    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:55.358145    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:55.439213    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:55.430988   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.431927   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.433074   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.434586   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.435691   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:55.430988   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.431927   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.433074   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.434586   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.435691   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:55.439213    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:55.439744    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:57.972346    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:57.997412    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:58.029794    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.029794    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:58.033582    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:58.064729    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.064729    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:58.068722    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:58.103854    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.103854    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:58.107069    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:58.140767    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.140767    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:58.145080    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:58.172792    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.172792    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:58.177038    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:58.205809    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.205809    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:58.209371    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:58.236353    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.236353    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:58.240621    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:58.269469    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.269469    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:58.269469    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:58.269469    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:58.324960    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:58.324960    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:58.384708    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:58.384708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:58.423476    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:58.423476    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:58.512328    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:58.500192   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.501577   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.503665   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.506831   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.509044   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:58.500192   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.501577   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.503665   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.506831   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.509044   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:58.512387    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:58.512387    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:01.044354    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:01.073699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:01.104765    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.104836    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:01.107915    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:01.141131    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.141131    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:01.145209    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:01.174536    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.174536    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:01.178187    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:01.209172    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.209172    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:01.212803    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:01.241435    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.241486    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:01.245545    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:01.277115    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.277115    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:01.281366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:01.312158    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.312158    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:01.316725    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:01.343220    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.343220    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:01.343220    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:01.343220    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:01.382233    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:01.382233    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:01.487570    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:01.476084   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.477142   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.479990   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.481020   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.482426   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:01.476084   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.477142   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.479990   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.481020   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.482426   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:01.488578    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:01.488578    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:01.514572    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:01.514572    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:01.567754    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:01.567754    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:04.140604    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:04.165376    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:04.197379    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.197379    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:04.202896    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:04.231436    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.231506    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:04.235354    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:04.267960    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.267960    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:04.271789    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:04.301108    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.301108    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:04.305219    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:04.334515    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.334515    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:04.338693    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:04.366071    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.366071    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:04.369958    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:04.398457    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.398457    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:04.405087    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:04.432495    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.432495    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:04.432495    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:04.432495    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:04.492454    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:04.492454    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:04.530878    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:04.530878    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:04.615739    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:04.603893   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.604965   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.606519   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.608498   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.609457   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:04.603893   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.604965   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.606519   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.608498   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.609457   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:04.615739    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:04.615739    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:04.643270    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:04.643304    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:07.195429    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:07.221998    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:07.254842    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.254842    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:07.258578    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:07.291820    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.291820    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:07.297979    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:07.329603    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.329603    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:07.334181    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:07.363276    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.363324    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:07.367248    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:07.394630    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.394695    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:07.398679    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:07.425998    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.425998    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:07.429814    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:07.458824    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.458878    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:07.462682    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:07.490543    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.490614    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:07.490614    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:07.490614    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:07.575806    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:07.562525   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.563684   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.568204   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.569084   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.572372   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:07.562525   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.563684   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.568204   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.569084   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.572372   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:07.575806    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:07.576816    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:07.607910    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:07.607910    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:07.659155    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:07.659155    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:07.722240    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:07.722240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
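
The sudo pgrep -xnf kube-apiserver.*minikube.* probes above recur at roughly three-second intervals until the start budget is exhausted. The sketch below reproduces that shape as a fixed-interval wait loop; the three-second cadence is read off the timestamps, and the overall deadline is an assumption for illustration, not a value taken from minikube's source.

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(4 * time.Minute) // overall budget (assumed)
		for time.Now().Before(deadline) {
			// pgrep exits 0 only if a kube-apiserver process matching the
			// pattern exists; the log shows this failing on every iteration.
			err := exec.Command("sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Run()
			if err == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			time.Sleep(3 * time.Second) // cadence observed in the log
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}
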
	I1217 02:12:10.270711    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:10.295753    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:10.324920    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.324920    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:10.328903    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:10.358180    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.358218    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:10.362249    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:10.390135    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.390135    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:10.393738    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:10.423058    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.423090    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:10.426534    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:10.456745    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.456745    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:10.463439    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:10.493765    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.493765    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:10.497858    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:10.526425    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.526425    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:10.532217    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:10.563338    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.563338    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:10.563338    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:10.563338    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:10.627669    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:10.627669    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:10.666455    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:10.666455    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:10.755613    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:10.742575   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.744309   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.748746   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.750149   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.751294   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:10.755613    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:10.755613    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:10.786516    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:10.787045    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:13.342631    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:13.368870    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:13.402304    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.402347    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:13.408012    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:13.436633    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.436710    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:13.439877    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:13.468754    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.469007    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:13.473752    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:13.505247    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.505324    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:13.509766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:13.538745    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.538745    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:13.542743    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:13.571986    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.571986    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:13.575522    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:13.604002    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.604002    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:13.608063    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:13.636028    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.636028    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:13.636028    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:13.636028    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:13.701418    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:13.701418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:13.740729    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:13.740729    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:13.830687    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:13.819650   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.820972   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.822197   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.823236   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.826085   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:13.830746    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:13.830768    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:13.856732    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:13.856732    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:16.415071    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:16.441827    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:16.474920    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.474920    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:16.478560    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:16.509149    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.509149    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:16.512927    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:16.544114    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.544114    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:16.547867    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:16.578111    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.578111    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:16.581776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:16.610586    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.610586    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:16.614807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:16.644103    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.644103    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:16.647954    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:16.692289    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.692289    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:16.696153    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:16.727229    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.727229    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:16.727229    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:16.727229    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:16.823236    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:16.813914   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.815339   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.816582   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.817632   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.818568   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:16.823236    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:16.823236    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:16.849827    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:16.849827    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:16.905388    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:16.905414    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:16.965153    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:16.965153    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:19.511192    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:19.537347    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:19.568920    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.568920    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:19.573318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:19.604587    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.604587    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:19.608244    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:19.637707    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.637732    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:19.641314    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:19.669047    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.669047    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:19.672932    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:19.703243    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.703243    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:19.706862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:19.738948    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.738948    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:19.742483    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:19.773620    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.773620    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:19.777766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:19.807218    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.807218    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:19.807218    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:19.807218    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:19.872750    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:19.872750    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:19.912835    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:19.912835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:19.997398    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:19.986540   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.987576   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.989197   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.992124   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.993453   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:19.997398    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:19.997398    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:20.025629    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:20.025629    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:22.593289    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:22.619754    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:22.652929    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.652929    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:22.657635    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:22.689768    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.689846    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:22.693504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:22.720087    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.720087    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:22.723840    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:22.752902    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.752959    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:22.757109    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:22.787369    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.787369    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:22.791584    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:22.822117    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.822117    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:22.825675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:22.856022    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.856022    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:22.859609    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:22.886982    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.886982    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:22.886982    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:22.886982    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:22.972988    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:22.964488   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.965494   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.966951   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.967984   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.968891   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:22.972988    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:22.972988    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:23.002037    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:23.002037    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:23.061548    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:23.061548    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:23.124352    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:23.124352    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:25.670974    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:25.706279    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:25.741150    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.741150    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:25.745079    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:25.773721    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.773782    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:25.779777    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:25.808516    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.808516    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:25.813011    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:25.844755    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.844755    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:25.848591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:25.877332    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.877332    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:25.881053    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:25.907973    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.907973    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:25.914424    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:25.941138    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.941138    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:25.945025    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:25.974760    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.974760    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:25.974760    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:25.974760    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:26.012354    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:26.012354    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:26.113177    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:26.103007   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.104679   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.105508   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.108836   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.110003   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:26.113177    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:26.113177    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:26.144162    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:26.144245    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:26.194605    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:26.195138    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:28.763811    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:28.789762    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:28.820544    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.820544    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:28.824807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:28.855728    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.855728    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:28.860354    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:28.894655    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.894655    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:28.898069    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:28.928310    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.928394    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:28.932124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:28.967209    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.967209    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:28.973126    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:29.002975    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.003024    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:29.006839    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:29.044805    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.044881    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:29.049158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:29.078108    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.078142    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:29.078174    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:29.078202    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:29.142751    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:29.142751    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:29.182082    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:29.182082    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:29.271566    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:29.260263   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.261578   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.262370   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.263821   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.265155   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:29.271596    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:29.271643    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:29.299332    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:29.299332    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:31.856743    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:31.882741    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:31.912323    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.912372    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:31.917046    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:31.948587    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.948631    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:31.952095    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:31.981682    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.981682    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:31.985888    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:32.022173    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.022173    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:32.026061    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:32.070026    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.070026    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:32.074016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:32.105255    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.105255    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:32.109062    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:32.140873    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.140947    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:32.143941    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:32.172848    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.172876    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:32.172876    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:32.172876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:32.237207    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:32.237207    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:32.275838    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:32.275838    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:32.360656    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:32.349190   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.350542   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.352960   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.354559   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.355745   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:32.360656    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:32.360656    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:32.391099    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:32.391099    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:34.970955    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:35.002200    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:35.036658    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.036658    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:35.041208    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:35.068998    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.068998    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:35.075758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:35.105253    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.105253    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:35.109356    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:35.137411    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.137411    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:35.141289    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:35.168542    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.168542    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:35.174717    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:35.204677    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.204677    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:35.209675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:35.240901    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.240901    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:35.244034    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:35.276453    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.276453    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:35.276453    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:35.276453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:35.341158    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:35.341158    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:35.381822    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:35.381822    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:35.472890    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:35.461861   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.463097   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.464080   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.465245   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.466603   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:35.472890    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:35.472890    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:35.501374    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:35.501374    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:38.054644    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:38.080787    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:38.112397    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.112420    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:38.116070    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:38.144341    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.144396    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:38.148080    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:38.177159    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.177159    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:38.181253    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:38.210000    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.210000    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:38.215709    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:38.243526    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.243526    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:38.247620    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:38.278443    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.278443    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:38.282504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:38.314414    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.314414    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:38.317968    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:38.345306    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.345306    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:38.345306    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:38.345412    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:38.425240    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:38.414795   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.415865   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.416969   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.418280   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.420090   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:38.414795   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.415865   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.416969   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.418280   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.420090   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:38.425240    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:38.425240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:38.455129    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:38.455129    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:38.514775    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:38.514775    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:38.574255    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:38.574255    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:41.116537    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:41.139650    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:41.169726    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.169814    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:41.173285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:41.204812    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.204812    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:41.208892    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:41.235980    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.235980    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:41.240200    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:41.271415    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.271415    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:41.275005    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:41.303967    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.303967    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:41.309707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:41.340401    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.340401    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:41.343688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:41.374008    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.374008    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:41.377325    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:41.409502    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.409563    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:41.409563    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:41.409610    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:41.472168    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:41.472168    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:41.513098    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:41.513098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:41.601716    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:41.590607   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.591236   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.594281   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.595448   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.596679   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:41.590607   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.591236   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.594281   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.595448   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.596679   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
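Each `describe nodes` attempt fails identically: the kubeconfig points kubectl at https://localhost:8443, and with no kube-apiserver container running, the dial to [::1]:8443 is refused. A hedged pre-check sketch that reproduces the same symptom without invoking kubectl (the host and port are taken from the log; performing this check is an assumption, not something the harness does):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Same endpoint kubectl is dialing via the minikube kubeconfig.
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // With no apiserver container this prints "connect: connection refused",
            // matching the stderr blocks above.
            fmt.Println("apiserver not reachable:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port open")
    }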
	I1217 02:12:41.601716    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:41.601716    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:41.629092    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:41.629148    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:44.185012    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:44.210566    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:44.242274    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.242274    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:44.248762    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:44.280241    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.280307    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:44.283818    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:44.312929    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.312997    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:44.316643    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:44.343840    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.343840    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:44.347619    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:44.378547    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.378547    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:44.382595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:44.410908    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.410908    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:44.414686    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:44.448329    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.448329    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:44.453888    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:44.484842    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.484842    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:44.484842    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:44.484842    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:44.550740    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:44.550740    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:44.589666    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:44.589666    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:44.677625    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:44.666291   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.667584   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.668804   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.671406   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.673722   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:44.666291   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.667584   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.668804   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.671406   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.673722   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:44.677625    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:44.677625    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:44.706051    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:44.706051    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
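The `container status` step falls back from crictl to docker: the bash one-liner above tries `crictl ps -a` when crictl resolves on PATH and otherwise runs `docker ps -a`. A rough Go equivalent of that fallback, with the shape inferred from the bash line rather than taken from minikube's source:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Prefer crictl when it resolves on PATH, else fall back to docker,
        // like the bash one-liner in the log.
        tool := "docker"
        if _, err := exec.LookPath("crictl"); err == nil {
            tool = "crictl"
        }
        out, err := exec.Command(tool, "ps", "-a").CombinedOutput()
        if err != nil {
            fmt.Printf("%s ps -a failed: %v\n", tool, err)
        }
        fmt.Print(string(out))
    }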
	I1217 02:12:47.257477    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:47.286845    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:47.315563    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.315563    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:47.319220    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:47.351319    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.351319    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:47.354946    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:47.382237    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.382237    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:47.386106    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:47.415608    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.415608    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:47.419575    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:47.449212    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.449241    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:47.452978    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:47.482356    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.482356    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:47.486511    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:47.518156    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.518205    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:47.522254    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:47.550631    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.550631    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:47.550631    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:47.550727    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:47.615950    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:47.615950    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:47.655928    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:47.655928    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:47.744126    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:47.732398   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.733599   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.736473   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.737237   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.739895   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:47.732398   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.733599   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.736473   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.737237   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.739895   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:47.744164    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:47.744210    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:47.773502    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:47.773502    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:50.331328    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:50.368555    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:50.407443    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.407443    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:50.411798    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:50.440520    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.440544    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:50.444430    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:50.478050    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.478050    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:50.481848    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:50.513603    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.513658    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:50.517565    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:50.551935    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.552946    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:50.556641    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:50.591171    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.591171    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:50.594981    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:50.624821    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.624821    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:50.628756    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:50.661209    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.661209    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:50.661209    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:50.661209    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:50.693141    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:50.693141    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:50.746322    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:50.746322    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:50.805974    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:50.805974    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:50.844572    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:50.844572    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:50.935133    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:50.925528   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.926281   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.929008   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.930044   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.931058   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:50.925528   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.926281   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.929008   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.930044   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.931058   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:53.441690    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:53.466017    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:53.494846    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.494846    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:53.499634    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:53.530839    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.530839    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:53.534661    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:53.567189    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.567189    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:53.571412    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:53.598763    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.598763    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:53.602673    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:53.629791    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.629791    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:53.632953    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:53.662323    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.662323    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:53.665394    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:53.695745    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.695745    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:53.701403    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:53.735348    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.735348    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:53.735348    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:53.735348    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:53.816532    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:53.807828   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.809036   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.810223   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.811373   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.812449   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:53.807828   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.809036   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.810223   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.811373   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.812449   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
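The `Gathering logs for Docker ...` step is simply the last 400 journal lines for the docker and cri-docker units. An illustrative wrapper, assuming a systemd host; the unit names and line count come straight from the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Last 400 journal lines for both container-runtime units, as in the log.
        out, err := exec.Command("journalctl",
            "-u", "docker", "-u", "cri-docker", "-n", "400").CombinedOutput()
        if err != nil {
            fmt.Println("journalctl failed:", err)
        }
        fmt.Print(string(out))
    }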
	I1217 02:12:53.816532    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:53.816532    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:53.843453    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:53.843453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:53.893853    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:53.893853    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:53.954759    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:53.954759    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:56.499506    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:56.525316    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:56.561689    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.561738    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:56.565616    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:56.594009    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.594009    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:56.599822    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:56.624101    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.624101    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:56.628604    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:56.657977    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.658063    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:56.663240    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:56.694316    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.694316    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:56.698763    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:56.728527    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.728527    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:56.734446    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:56.765315    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.765315    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:56.769182    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:56.796198    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.796198    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:56.796198    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:56.796198    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:56.864777    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:56.864777    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:56.904264    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:56.904264    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:57.000434    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:56.990265   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.991556   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.992920   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.993844   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.996033   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:56.990265   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.991556   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.992920   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.993844   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.996033   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:57.000434    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:57.000434    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:57.034757    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:57.034842    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:59.601768    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:59.627731    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:59.657009    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.657009    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:59.660962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:59.690428    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.690428    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:59.694181    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:59.723517    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.723592    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:59.727191    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:59.756251    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.756251    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:59.759627    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:59.791516    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.791516    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:59.795602    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:59.828192    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.828192    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:59.832003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:59.860258    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.860258    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:59.863635    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:59.893207    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.893207    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:59.893207    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:59.893207    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:59.958927    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:59.958927    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:00.004703    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:00.004703    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:00.096612    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:00.084050   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.085145   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.086221   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.088049   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.090502   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:00.084050   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.085145   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.086221   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.088049   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.090502   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:00.096612    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:00.096612    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:00.124914    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:00.124975    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:02.682962    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:02.708543    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:02.737663    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.737663    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:02.741817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:02.772482    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.772482    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:02.778562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:02.806978    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.806978    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:02.813021    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:02.845688    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.845688    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:02.851578    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:02.880144    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.880200    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:02.883811    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:02.918466    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.918544    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:02.922186    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:02.951702    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.951702    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:02.955491    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:02.984638    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.984638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:02.984638    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:02.984638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:03.047941    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:03.047941    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:03.086964    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:03.086964    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:03.173007    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:03.161327   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.162497   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.163381   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.165030   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.166441   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:03.161327   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.162497   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.163381   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.165030   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.166441   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:03.173086    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:03.173086    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:03.202017    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:03.202544    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:05.761010    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:05.786319    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:05.819785    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.819785    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:05.825532    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:05.853318    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.853318    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:05.858274    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:05.887613    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.887613    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:05.891162    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:05.919471    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.919471    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:05.922933    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:05.955441    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.955441    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:05.959241    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:05.984925    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.984925    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:05.989009    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:06.021101    6296 logs.go:282] 0 containers: []
	W1217 02:13:06.021101    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:06.024383    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:06.055098    6296 logs.go:282] 0 containers: []
	W1217 02:13:06.055098    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:06.055098    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:06.055098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:06.107743    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:06.107743    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:06.170319    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:06.170319    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:06.210360    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:06.210360    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:06.299194    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:06.288404   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.289415   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.292346   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.293307   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.294574   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:06.299194    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:06.299194    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
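The probe sequence above repeats every few seconds until minikube's wait timeout expires. A minimal bash sketch of the same polling loop (the pgrep and docker ps invocations are taken verbatim from the log; the loop structure and the roughly 3-second interval are assumptions inferred from the timestamps, not minikube's actual Go implementation):

	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	                kube-controller-manager kindnet kubernetes-dashboard; do
	        # Empty output here corresponds to the "0 containers: []" lines above.
	        docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}'
	    done
	    sleep 3
	done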
	I1217 02:13:08.832901    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:08.860263    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:08.890111    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.890111    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:08.893617    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:08.921989    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.921989    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:08.925561    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:08.952883    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.952883    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:08.959516    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:08.991347    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.991347    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:08.995066    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:09.028011    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.028011    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:09.032096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:09.060803    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.060803    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:09.064596    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:09.093542    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.093572    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:09.096987    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:09.123594    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.123615    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:09.123615    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:09.123615    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:09.176222    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:09.176222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:09.238935    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:09.238935    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:09.278804    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:09.278804    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:09.367283    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:09.355984   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.356989   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.358233   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.359697   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.360921   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:09.367283    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:09.367283    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
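Every "describe nodes" attempt in this section fails identically because nothing is listening on apiserver port 8443 inside the node. The failure is reproducible by hand with the same kubectl binary the log invokes; the trailing get --raw /readyz probe is an illustrative extra check, not something the test itself runs:

	# Both fail with "dial tcp [::1]:8443: connect: connection refused"
	# while no kube-apiserver container is running.
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig describe nodes
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl \
	    --kubeconfig=/var/lib/minikube/kubeconfig get --raw /readyz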
	I1217 02:13:11.901781    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:11.930493    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:11.963534    6296 logs.go:282] 0 containers: []
	W1217 02:13:11.963534    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:11.967747    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:11.997700    6296 logs.go:282] 0 containers: []
	W1217 02:13:11.997700    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:12.001601    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:12.031862    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.031862    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:12.035544    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:12.066506    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.066506    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:12.071472    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:12.103184    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.103184    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:12.107033    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:12.135713    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.135713    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:12.139268    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:12.170350    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.170350    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:12.174053    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:12.202964    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.202964    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:12.202964    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:12.202964    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:12.252669    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:12.253197    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:12.318088    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:12.318088    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:12.356768    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:12.356768    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:12.443857    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:12.431867   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.432694   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.435515   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.436810   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.439065   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:12.443857    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:12.443857    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
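Each iteration gathers the same four log sources. Combined into one shell snippet for reference (the individual commands appear verbatim in the log; running them back to back as a single script is an illustrative assumption):

	sudo `which crictl || echo crictl` ps -a || sudo docker ps -a              # container status
	sudo journalctl -u kubelet -n 400                                          # kubelet
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # dmesg
	sudo journalctl -u docker -u cri-docker -n 400                             # Docker / cri-dockerd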
	I1217 02:13:14.980350    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:15.007303    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:15.040020    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.040100    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:15.043303    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:15.073502    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.073502    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:15.077944    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:15.106871    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.106871    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:15.110453    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:15.138071    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.138095    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:15.141547    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:15.171602    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.171659    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:15.175341    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:15.207140    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.207181    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:15.210547    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:15.243222    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.243222    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:15.247103    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:15.280156    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.280232    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:15.280232    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:15.280232    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:15.342862    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:15.342862    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:15.384022    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:15.384022    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:15.469724    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:15.457538   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.458755   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.461376   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.463262   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.464126   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:15.469766    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:15.469807    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:15.497606    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:15.497667    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:18.064895    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:18.090410    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:18.123378    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.123429    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:18.127331    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:18.157210    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.157210    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:18.160924    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:18.191242    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.191242    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:18.195064    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:18.222561    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.222561    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:18.226125    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:18.255891    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.255891    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:18.259860    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:18.288868    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.288868    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:18.292834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:18.322668    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.322668    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:18.325666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:18.353052    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.353052    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:18.353052    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:18.353052    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:18.418504    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:18.418504    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:18.457348    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:18.457348    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:18.568946    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:18.539845   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.540709   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.559501   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.563750   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.565031   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:18.569003    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:18.569003    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:18.602236    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:18.602236    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:21.158752    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:21.184475    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:21.214582    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.214582    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:21.218375    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:21.245604    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.245604    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:21.249850    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:21.281360    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.281360    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:21.286501    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:21.318549    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.318601    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:21.322609    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:21.353429    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.353460    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:21.357031    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:21.391028    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.391028    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:21.394206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:21.423584    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.423584    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:21.427599    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:21.458683    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.458683    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:21.458683    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:21.458683    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:21.526430    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:21.526430    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:21.565490    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:21.565490    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:21.656323    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:21.643307   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.644610   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.648760   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.649980   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.650911   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:21.656323    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:21.656323    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:21.689700    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:21.689700    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:24.246630    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:24.280925    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:24.322972    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.322972    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:24.326768    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:24.355732    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.355732    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:24.359957    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:24.391937    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.392009    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:24.395559    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:24.427388    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.427388    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:24.431126    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:24.459891    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.459966    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:24.463468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:24.491009    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.491009    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:24.494465    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:24.524468    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.524468    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:24.528017    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:24.568815    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.568815    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:24.568815    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:24.568815    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:24.632772    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:24.632772    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:24.671731    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:24.671731    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:24.755604    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:24.747209   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.748169   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.750016   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.751205   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.752643   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:24.755604    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:24.755604    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:24.784599    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:24.784660    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:27.338272    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:27.366367    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:27.395715    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.395715    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:27.399158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:27.427362    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.427362    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:27.430752    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:27.461990    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.461990    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:27.465748    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:27.492985    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.492985    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:27.497176    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:27.528724    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.528724    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:27.532970    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:27.571655    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.571655    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:27.575466    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:27.604007    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.604068    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:27.608062    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:27.639624    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.639689    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:27.639735    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:27.639735    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:27.705896    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:27.705896    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:27.745294    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:27.745294    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:27.827462    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:27.817987   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.819077   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.820142   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.821119   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.823572   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:27.827462    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:27.827462    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:27.854463    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:27.854559    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:30.412283    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:30.438474    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:30.469848    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.469848    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:30.473330    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:30.501713    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.501713    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:30.505748    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:30.535870    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.535870    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:30.540177    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:30.572310    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.572310    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:30.576644    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:30.607087    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.607087    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:30.610334    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:30.640168    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.640168    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:30.643628    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:30.671132    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.671132    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:30.677927    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:30.708536    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.708536    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:30.708536    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:30.708536    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:30.773222    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:30.773222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:30.812763    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:30.812763    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:30.932347    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:30.917907   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.918960   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.921632   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.923322   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.925337   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:30.932397    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:30.932444    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:30.961663    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:30.961663    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:33.524404    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:33.548624    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:33.580753    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.580845    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:33.583912    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:33.613001    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.613048    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:33.616808    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:33.645262    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.645262    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:33.649044    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:33.677477    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.677562    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:33.681205    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:33.710607    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.710669    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:33.714600    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:33.742889    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.742889    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:33.746623    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:33.777022    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.777022    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:33.780455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:33.809525    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.809525    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:33.809525    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:33.809525    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:33.860852    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:33.860936    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:33.924768    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:33.924768    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:33.962632    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:33.962632    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:34.054124    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:34.042221   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.043292   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.044548   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.046184   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.047237   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:34.054124    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:34.054124    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:36.589465    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:36.617658    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:36.652432    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.652432    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:36.656189    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:36.694709    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.694709    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:36.700040    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:36.729913    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.729913    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:36.733870    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:36.762591    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.762591    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:36.766493    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:36.796414    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.796414    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:36.800540    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:36.828148    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.828148    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:36.833323    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:36.862390    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.862390    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:36.866173    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:36.895727    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.895814    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:36.895814    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:36.895814    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:36.926240    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:36.926240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:36.975760    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:36.975760    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:37.036350    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:37.036350    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:37.072745    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:37.072745    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:37.161612    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:37.149826   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.150994   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.152971   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.154071   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.155248   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:37.149826   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.150994   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.152971   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.154071   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.155248   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:39.667288    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:39.691212    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:39.724148    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.724148    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:39.727935    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:39.761821    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.761821    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:39.765852    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:39.793659    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.793696    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:39.797422    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:39.825439    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.825473    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:39.828751    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:39.859011    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.859011    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:39.862518    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:39.891552    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.891613    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:39.894978    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:39.926857    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.926857    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:39.930363    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:39.975835    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.975835    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:39.975835    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:39.975835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:40.070107    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:40.058472   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.059584   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.060546   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.062682   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.064347   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:40.058472   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.059584   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.060546   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.062682   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.064347   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:40.070107    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:40.070107    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:40.098563    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:40.098605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:40.147476    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:40.147476    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:40.212702    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:40.212702    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:42.757339    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:42.786178    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:42.817429    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.817429    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:42.821164    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:42.850363    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.850415    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:42.854031    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:42.881774    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.881774    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:42.885802    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:42.915556    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.915556    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:42.919184    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:42.948329    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.948329    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:42.952430    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:42.982355    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.982355    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:42.986768    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:43.017700    6296 logs.go:282] 0 containers: []
	W1217 02:13:43.017700    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:43.021284    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:43.052749    6296 logs.go:282] 0 containers: []
	W1217 02:13:43.052779    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:43.052779    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:43.052813    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:43.091605    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:43.091605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:43.175861    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:43.162839   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.163916   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.164763   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.167177   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.170134   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:43.162839   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.163916   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.164763   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.167177   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.170134   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:43.175861    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:43.175861    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:43.204569    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:43.204569    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:43.257132    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:43.257132    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:45.825092    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:45.853653    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:45.886780    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.886780    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:45.890416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:45.921840    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.923184    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:45.928382    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:45.960187    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.960252    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:45.963959    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:45.993658    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.993712    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:45.997113    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:46.024308    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.024308    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:46.027994    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:46.060725    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.060725    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:46.064446    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:46.092825    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.092825    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:46.098256    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:46.129614    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.129688    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:46.129688    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:46.129688    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:46.216242    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:46.204904   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.206123   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.207788   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.210288   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.211623   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:46.204904   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.206123   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.207788   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.210288   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.211623   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:46.216263    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:46.216263    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:46.248767    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:46.248767    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:46.298044    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:46.298044    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:46.363186    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:46.363186    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:48.911992    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:48.946588    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:48.983880    6296 logs.go:282] 0 containers: []
	W1217 02:13:48.983880    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:48.987999    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:49.017254    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.017254    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:49.021239    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:49.053619    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.053619    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:49.057711    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:49.086289    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.086289    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:49.090230    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:49.123069    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.123069    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:49.130107    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:49.158724    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.158724    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:49.162733    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:49.193515    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.193573    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:49.197116    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:49.230153    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.230201    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:49.230245    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:49.230245    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:49.259747    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:49.259747    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:49.312360    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:49.312456    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:49.375035    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:49.375035    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:49.413908    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:49.413908    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:49.508187    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:49.496893   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.499745   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.502343   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.503338   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.504593   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:49.496893   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.499745   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.502343   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.503338   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.504593   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:52.012834    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:52.037104    6296 out.go:203] 
	W1217 02:13:52.039462    6296 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 02:13:52.039520    6296 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	* Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 02:13:52.039588    6296 out.go:285] * Related issues:
	* Related issues:
	W1217 02:13:52.039588    6296 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	  - https://github.com/kubernetes/minikube/issues/4536
	W1217 02:13:52.039635    6296 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	  - https://github.com/kubernetes/minikube/issues/6014
	I1217 02:13:52.041923    6296 out.go:203] 

** /stderr **
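Note: the loop above is minikube repeatedly probing the node for a kube-apiserver process and for each expected control-plane container (etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager), and every probe returns empty until the start aborts with K8S_APISERVER_MISSING. For manual triage of a run like this, the same probes the log shows can be replayed over `minikube ssh`; this is only a sketch, using this run's profile name and the exact commands ssh_runner executed above:

    minikube -p newest-cni-383500 ssh -- "sudo pgrep -xnf kube-apiserver.*minikube.*"
    minikube -p newest-cni-383500 ssh -- "docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}"
    minikube -p newest-cni-383500 ssh -- "sudo journalctl -u kubelet -n 400"

If the first two come back empty while the kubelet journal shows errors, the failure sits in the control plane never starting, which is consistent with the repeated connection-refused errors against localhost:8443.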
start_stop_delete_test.go:257: failed to start minikube post-stop. args "out/minikube-windows-amd64.exe start -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0": exit status 105
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-383500
helpers_test.go:244: (dbg) docker inspect newest-cni-383500:

-- stdout --
	[
	    {
	        "Id": "58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638",
	        "Created": "2025-12-17T01:57:11.100405677Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 462672,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T02:07:38.479713902Z",
	            "FinishedAt": "2025-12-17T02:07:35.952064424Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/hostname",
	        "HostsPath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/hosts",
	        "LogPath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638-json.log",
	        "Name": "/newest-cni-383500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-383500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-383500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-383500",
	                "Source": "/var/lib/docker/volumes/newest-cni-383500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-383500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-383500",
	                "name.minikube.sigs.k8s.io": "newest-cni-383500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1db633168a5c321973d71a3d7a937d0960662192a945d2448f4398b25b744030",
	            "SandboxKey": "/var/run/docker/netns/1db633168a5c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63782"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63783"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63784"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-383500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a0a3f566cb0e1e68eaf85fc99a3ee131940651a4c9a15e291bc077be33f07b4e",
	                    "EndpointID": "d5e1ca0ef443df8c9e41596f8db19fb0cd842fc42e6efd30a71aaa1d3fefb2d9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-383500",
	                        "58edac260513"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
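Note: the inspect output shows the Docker layer itself is healthy: the node container is running and 8443/tcp (the apiserver port) is published to 127.0.0.1:63786. As a sketch (POSIX-shell quoting; the container name is this run's profile), the mapped host port can be read back with a Go template:

    docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-383500

Since the port mapping exists but nothing inside the container ever answered on 8443, the failure is above the container runtime, consistent with the start log.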
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-383500 -n newest-cni-383500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-383500 -n newest-cni-383500: exit status 2 (595.2326ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
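Note: minikube documents the status exit code as a bitmask of component states (see `minikube status --help`), so "Running" on stdout with exit status 2 indicates the host is up while the cluster is not, which is why the harness treats it as "may be ok" and continues collecting logs. A per-component breakdown is available in JSON form; a sketch with this run's profile:

    out/minikube-windows-amd64.exe status -p newest-cni-383500 --output json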
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/SecondStart]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-383500 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-383500 logs -n 25: (1.6948266s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p default-k8s-diff-port-278200 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2                                                                             │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ old-k8s-version-044000 image list --format=json                                                                                                                                                                            │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ pause   │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ unpause │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │                     │
	│ image   │ embed-certs-653800 image list --format=json                                                                                                                                                                                │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ default-k8s-diff-port-278200 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-184000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	│ stop    │ -p no-preload-184000 --alsologtostderr -v=3                                                                                                                                                                                │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ addons  │ enable dashboard -p no-preload-184000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ start   │ -p no-preload-184000 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-383500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	│ stop    │ -p newest-cni-383500 --alsologtostderr -v=3                                                                                                                                                                                │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │ 17 Dec 25 02:07 UTC │
	│ addons  │ enable dashboard -p newest-cni-383500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │ 17 Dec 25 02:07 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │                     │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:07:37
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 02:07:37.336708    6296 out.go:360] Setting OutFile to fd 968 ...
	I1217 02:07:37.380113    6296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:07:37.380113    6296 out.go:374] Setting ErrFile to fd 1700...
	I1217 02:07:37.380113    6296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:07:37.394455    6296 out.go:368] Setting JSON to false
	I1217 02:07:37.396490    6296 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8845,"bootTime":1765928411,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 02:07:37.397485    6296 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 02:07:37.401853    6296 out.go:179] * [newest-cni-383500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 02:07:37.405009    6296 notify.go:221] Checking for updates...
	I1217 02:07:37.407761    6296 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:37.412054    6296 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:07:37.415031    6296 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 02:07:37.416942    6296 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:07:37.418887    6296 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1217 02:07:37.439676    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:07:37.422499    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:37.422499    6296 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:07:37.541250    6296 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 02:07:37.544536    6296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:07:37.790862    6296 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:07:37.763465755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
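The `docker system info --format "{{json .}}"` probe above (repeated at 02:07:38 below) is how the start path snapshots daemon state before validating the driver. A minimal Go sketch of the same probe, assuming only a Docker CLI on PATH; this is an illustration, not minikube's actual code path:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// One JSON document instead of the human-readable `docker info` table.
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info map[string]any
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	// Two fields the log above keys off: the cgroup driver and the server version.
	fmt.Println(info["CgroupDriver"], info["ServerVersion"])
}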
	I1217 02:07:37.793941    6296 out.go:179] * Using the docker driver based on existing profile
	I1217 02:07:37.795944    6296 start.go:309] selected driver: docker
	I1217 02:07:37.795944    6296 start.go:927] validating driver "docker" against &{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:37.796941    6296 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:07:37.881125    6296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:07:38.106129    6296 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:07:38.085504737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:07:38.106129    6296 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 02:07:38.106129    6296 cni.go:84] Creating CNI manager for ""
	I1217 02:07:38.106661    6296 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:07:38.106789    6296 start.go:353] cluster config:
	{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:38.110370    6296 out.go:179] * Starting "newest-cni-383500" primary control-plane node in "newest-cni-383500" cluster
	I1217 02:07:38.113499    6296 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 02:07:38.115628    6296 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:07:38.118799    6296 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:07:38.118867    6296 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:07:38.118972    6296 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 02:07:38.119036    6296 cache.go:65] Caching tarball of preloaded images
	I1217 02:07:38.119094    6296 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 02:07:38.119094    6296 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 02:07:38.119094    6296 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 02:07:38.197259    6296 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:07:38.197259    6296 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:07:38.197259    6296 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:07:38.197259    6296 start.go:360] acquireMachinesLock for newest-cni-383500: {Name:mk34ae41921c4a11acc2a38ede8796b825a35934 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:07:38.197259    6296 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-383500"
	I1217 02:07:38.197259    6296 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:07:38.197259    6296 fix.go:54] fixHost starting: 
	I1217 02:07:38.204499    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:38.259240    6296 fix.go:112] recreateIfNeeded on newest-cni-383500: state=Stopped err=<nil>
	W1217 02:07:38.259240    6296 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 02:07:38.262335    6296 out.go:252] * Restarting existing docker container for "newest-cni-383500" ...
	I1217 02:07:38.265716    6296 cli_runner.go:164] Run: docker start newest-cni-383500
	I1217 02:07:38.804123    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:38.863188    6296 kic.go:430] container "newest-cni-383500" state is running.
	I1217 02:07:38.868900    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:38.924169    6296 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 02:07:38.926083    6296 machine.go:94] provisionDockerMachine start ...
	I1217 02:07:38.928987    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:38.984001    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:38.984993    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:38.984993    6296 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:07:38.986003    6296 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 02:07:42.161557    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
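The repeated `docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'"` calls above resolve the ephemeral host port (63782 here) that Docker Desktop mapped to the guest's sshd before each dial. A sketch of the same lookup, assuming a running container named newest-cni-383500:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Ask the daemon which host port is bound to the container's 22/tcp.
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, "newest-cni-383500").Output()
	if err != nil {
		panic(err)
	}
	port := strings.TrimSpace(string(out))
	fmt.Printf("ssh -p %s docker@127.0.0.1\n", port) // the address libmachine dials
}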
	I1217 02:07:42.161646    6296 ubuntu.go:182] provisioning hostname "newest-cni-383500"
	I1217 02:07:42.166827    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.231443    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:42.231698    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:42.231698    6296 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-383500 && echo "newest-cni-383500" | sudo tee /etc/hostname
	I1217 02:07:42.423907    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 02:07:42.432743    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.491085    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:42.491085    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:42.491085    6296 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-383500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-383500/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-383500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:07:42.667009    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
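The shell fragment above is an idempotent /etc/hosts edit: rewrite an existing 127.0.1.1 line, or append one, but only when the hostname is missing. The same check-before-append logic in Go, as a simplified sketch that handles only the append branch and prints rather than writes:

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry returns hosts unchanged when name already appears,
// otherwise with a 127.0.1.1 mapping appended (the heredoc's else branch).
func ensureHostsEntry(hosts, name string) string {
	for _, line := range strings.Split(hosts, "\n") {
		fields := strings.Fields(line)
		if len(fields) >= 2 && fields[len(fields)-1] == name {
			return hosts
		}
	}
	return hosts + fmt.Sprintf("127.0.1.1 %s\n", name)
}

func main() {
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	fmt.Print(ensureHostsEntry(string(data), "newest-cni-383500"))
}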
	I1217 02:07:42.667009    6296 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 02:07:42.667009    6296 ubuntu.go:190] setting up certificates
	I1217 02:07:42.667009    6296 provision.go:84] configureAuth start
	I1217 02:07:42.671320    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:42.724474    6296 provision.go:143] copyHostCerts
	I1217 02:07:42.725072    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 02:07:42.725072    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 02:07:42.725072    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 02:07:42.726229    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 02:07:42.726229    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 02:07:42.726812    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 02:07:42.727386    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 02:07:42.727386    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 02:07:42.727386    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 02:07:42.728644    6296 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-383500 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-383500]
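provision.go:117 above issues a server certificate whose SAN list pins every name the endpoint may be reached by (127.0.0.1, 192.168.76.2, localhost, minikube, newest-cni-383500). A self-contained sketch of issuing such a leaf with Go's crypto/x509; it generates a throwaway CA for illustration, whereas the real run reuses ca.pem/ca-key.pem from the .minikube store:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA standing in for minikubeCA.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Server leaf carrying the SAN set from the log line above.
	leafKey := must(rsa.GenerateKey(rand.Reader, 2048))
	leafTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-383500"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.76.2")},
		DNSNames:     []string{"localhost", "minikube", "newest-cni-383500"},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	der := must(x509.CreateCertificate(rand.Reader, leafTmpl, caCert, &leafKey.PublicKey, caKey))
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}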
	I1217 02:07:42.882778    6296 provision.go:177] copyRemoteCerts
	I1217 02:07:42.886667    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:07:42.889412    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.946034    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:43.080244    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 02:07:43.111350    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:07:43.145228    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 02:07:43.176328    6296 provision.go:87] duration metric: took 509.312ms to configureAuth
	I1217 02:07:43.176328    6296 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:07:43.176328    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:43.180705    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.236378    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.237514    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.237514    6296 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 02:07:43.404492    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 02:07:43.404492    6296 ubuntu.go:71] root file system type: overlay
	I1217 02:07:43.405056    6296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 02:07:43.408624    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.465282    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.465408    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.465408    6296 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 02:07:43.658319    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 02:07:43.662395    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.719191    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.719552    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.719552    6296 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 02:07:43.890999    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 02:07:43.890999    6296 machine.go:97] duration metric: took 4.9648419s to provisionDockerMachine
	I1217 02:07:43.890999    6296 start.go:293] postStartSetup for "newest-cni-383500" (driver="docker")
	I1217 02:07:43.890999    6296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:07:43.895385    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:07:43.899109    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.952181    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.085157    6296 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:07:44.092998    6296 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:07:44.093086    6296 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:07:44.093086    6296 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 02:07:44.093465    6296 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 02:07:44.094379    6296 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 02:07:44.099969    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:07:44.115031    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 02:07:44.146317    6296 start.go:296] duration metric: took 255.2637ms for postStartSetup
	I1217 02:07:44.150381    6296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:07:44.153098    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.206142    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.337637    6296 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:07:44.346313    6296 fix.go:56] duration metric: took 6.1489614s for fixHost
	I1217 02:07:44.346313    6296 start.go:83] releasing machines lock for "newest-cni-383500", held for 6.1489614s
	I1217 02:07:44.350643    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:44.409164    6296 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 02:07:44.413957    6296 ssh_runner.go:195] Run: cat /version.json
	I1217 02:07:44.414540    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.416694    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.466739    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.469418    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	W1217 02:07:44.591848    6296 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
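The exit-127 above looks like a host/guest mismatch: the probe command appears to have been assembled with the Windows binary name (curl.exe) but was executed inside the Linux guest over SSH, so the reachability check for registry.k8s.io never actually ran, and the proxy warnings below are emitted on that basis. The pitfall in miniature (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"runtime"
)

// probeBinary picks a curl binary name for the *host* OS; once the command is
// shipped into a Linux container over SSH, the ".exe" suffix is wrong there.
func probeBinary() string {
	if runtime.GOOS == "windows" {
		return "curl.exe"
	}
	return "curl"
}

func main() {
	fmt.Println(probeBinary(), "-sS -m 2 https://registry.k8s.io/")
}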
	I1217 02:07:44.598090    6296 ssh_runner.go:195] Run: systemctl --version
	I1217 02:07:44.614283    6296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:07:44.624324    6296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:07:44.628955    6296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:07:44.642200    6296 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:07:44.642243    6296 start.go:496] detecting cgroup driver to use...
	I1217 02:07:44.642333    6296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:07:44.642453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:07:44.671216    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:07:44.689408    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:07:44.702919    6296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:07:44.707856    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:07:44.727869    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:07:44.747180    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	W1217 02:07:44.751020    6296 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 02:07:44.751020    6296 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 02:07:44.766866    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:07:44.786853    6296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:07:44.806986    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:07:44.828346    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:07:44.848400    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:07:44.870349    6296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:07:44.887217    6296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:07:44.905216    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:45.047629    6296 ssh_runner.go:195] Run: sudo systemctl restart containerd
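The run of `sed -r` edits above rewrites /etc/containerd/config.toml in place: pause image, restrict_oom_score_adj, SystemdCgroup=false (to match the cgroupfs driver detected on the host), v2 runc shims, the CNI conf_dir, and unprivileged ports. One of those rewrites expressed with Go's regexp over an inline sample, as a sketch of what the sed pattern does:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	sample := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	// Equivalent of: sed -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	fmt.Print(re.ReplaceAllString(sample, "${1}SystemdCgroup = false"))
}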
	I1217 02:07:45.203749    6296 start.go:496] detecting cgroup driver to use...
	I1217 02:07:45.203842    6296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:07:45.209421    6296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 02:07:45.236823    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:07:45.259331    6296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 02:07:45.337368    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:07:45.361492    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:07:45.381383    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:07:45.409600    6296 ssh_runner.go:195] Run: which cri-dockerd
	I1217 02:07:45.421762    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 02:07:45.435668    6296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 02:07:45.461708    6296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 02:07:45.616228    6296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 02:07:45.751670    6296 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 02:07:45.751670    6296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 02:07:45.778504    6296 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 02:07:45.800985    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:45.956342    6296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 02:07:46.816501    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:07:46.840410    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 02:07:46.865817    6296 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 02:07:46.890943    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:07:46.914319    6296 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 02:07:47.058242    6296 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 02:07:47.214522    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:47.355565    6296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I1217 02:07:47.382801    6296 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 02:07:47.407455    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	W1217 02:07:47.472644    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:07:47.558893    6296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 02:07:47.666138    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:07:47.686246    6296 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 02:07:47.690618    6296 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 02:07:47.697013    6296 start.go:564] Will wait 60s for crictl version
	I1217 02:07:47.702316    6296 ssh_runner.go:195] Run: which crictl
	I1217 02:07:47.713878    6296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:07:47.755301    6296 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 02:07:47.758809    6296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:07:47.803772    6296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:07:47.845573    6296 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 02:07:47.849368    6296 cli_runner.go:164] Run: docker exec -t newest-cni-383500 dig +short host.docker.internal
	I1217 02:07:47.978778    6296 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 02:07:47.983162    6296 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 02:07:47.993198    6296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:07:48.011887    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:48.072090    6296 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 02:07:48.073820    6296 kubeadm.go:884] updating cluster {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:07:48.073820    6296 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:07:48.077080    6296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 02:07:48.110342    6296 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 02:07:48.110411    6296 docker.go:621] Images already preloaded, skipping extraction
	I1217 02:07:48.113821    6296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 02:07:48.144461    6296 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 02:07:48.144530    6296 cache_images.go:86] Images are preloaded, skipping loading
	I1217 02:07:48.144530    6296 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1217 02:07:48.144779    6296 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-383500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 02:07:48.149102    6296 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 02:07:48.225894    6296 cni.go:84] Creating CNI manager for ""
	I1217 02:07:48.225894    6296 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:07:48.225894    6296 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 02:07:48.225894    6296 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-383500 NodeName:newest-cni-383500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:07:48.226504    6296 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-383500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
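In the rendered config above, the kubeadm.pod-network-cidr extra option (10.42.0.0/16, surfaced at 02:07:48.072) flows into both `networking.podSubnet` and kube-proxy's `clusterCIDR`, while services keep the default 10.96.0.0/12; the two ranges must stay disjoint. A quick check of that invariant:

package main

import (
	"fmt"
	"net"
)

// overlaps reports whether two CIDR blocks share any addresses.
func overlaps(a, b *net.IPNet) bool {
	return a.Contains(b.IP) || b.Contains(a.IP)
}

func main() {
	_, pods, err := net.ParseCIDR("10.42.0.0/16")
	if err != nil {
		panic(err)
	}
	_, svcs, err := net.ParseCIDR("10.96.0.0/12")
	if err != nil {
		panic(err)
	}
	fmt.Println("pod/service CIDR overlap:", overlaps(pods, svcs)) // false
}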
	I1217 02:07:48.230913    6296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 02:07:48.243749    6296 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:07:48.248634    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:07:48.262382    6296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 02:07:48.284386    6296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:07:48.306623    6296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1217 02:07:48.332101    6296 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:07:48.341865    6296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:07:48.360919    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:48.498620    6296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:07:48.520308    6296 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500 for IP: 192.168.76.2
	I1217 02:07:48.520346    6296 certs.go:195] generating shared ca certs ...
	I1217 02:07:48.520390    6296 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:48.520420    6296 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 02:07:48.521152    6296 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 02:07:48.521359    6296 certs.go:257] generating profile certs ...
	I1217 02:07:48.521695    6296 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key
	I1217 02:07:48.521695    6296 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8
	I1217 02:07:48.522472    6296 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key
	I1217 02:07:48.523217    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 02:07:48.523515    6296 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 02:07:48.523598    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 02:07:48.523888    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 02:07:48.524140    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 02:07:48.524399    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 02:07:48.525045    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 02:07:48.526649    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:07:48.558725    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 02:07:48.590333    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:07:48.621493    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 02:07:48.650907    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:07:48.678948    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:07:48.708871    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:07:48.738822    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 02:07:48.769873    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 02:07:48.801411    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 02:07:48.828208    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:07:48.859551    6296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
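
The "scp memory -->" lines above differ from the file copies before them: the payload (here, the generated kubeconfig) is streamed from memory rather than read from a local file. Below is a minimal, hypothetical Go sketch of that pattern using golang.org/x/crypto/ssh; the helper name, the "sudo tee" transport, the key path, and the address 127.0.0.1:63782 (taken from the sshutil lines later in this log) are illustrative assumptions, not minikube's actual ssh_runner code.

    // sshcopy_sketch.go: streaming in-memory bytes to a remote file over SSH,
    // the shape of an "scp memory --> <dest>" step. Hypothetical sketch, not
    // minikube's ssh_runner; uses golang.org/x/crypto/ssh.
    package main

    import (
        "bytes"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // copyFromMemory writes data to dest on the remote host via "sudo tee",
    // so the payload never has to exist as a local file.
    func copyFromMemory(client *ssh.Client, data []byte, dest string) error {
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        sess.Stdin = bytes.NewReader(data)
        return sess.Run(fmt.Sprintf("sudo tee %s >/dev/null", dest))
    }

    func main() {
        key, err := os.ReadFile("id_rsa") // key path and address are illustrative
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:63782", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()
        fmt.Println(copyFromMemory(client, []byte("kubeconfig contents"), "/var/lib/minikube/kubeconfig"))
    }
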
	I1217 02:07:48.888197    6296 ssh_runner.go:195] Run: openssl version
	I1217 02:07:48.903194    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.920018    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 02:07:48.936734    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.943690    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.948571    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.997651    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:07:49.015514    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.035513    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:07:49.056511    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.065394    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.070742    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.117805    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:07:49.140198    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.156992    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 02:07:49.175485    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.184194    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.187479    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.237543    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
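
The three-step sequence repeated above for each CA (ls -la, "openssl x509 -hash -noout", then "sudo ln -fs" into /etc/ssl/certs, verified with "sudo test -L") builds the standard OpenSSL hashed-directory layout: tools look a CA up by its subject hash, so the symlink must be named <hash>.0, which is why the log checks 3ec20f2e.0, b5213941.0, and 51391683.0. A minimal Go sketch of the same command pair, assuming openssl is on PATH; the helper name is mine, not minikube's:

    // linkca_sketch.go: the hash-and-symlink step shown above, from Go.
    // Hypothetical sketch (minikube actually runs these commands over SSH).
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // linkCACert computes the OpenSSL subject hash of a PEM certificate and
    // symlinks it as /etc/ssl/certs/<hash>.0, mirroring the
    // "openssl x509 -hash -noout" + "ln -fs" pair in the log.
    func linkCACert(pemPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return fmt.Errorf("hashing %s: %w", pemPath, err)
        }
        hash := strings.TrimSpace(string(out))
        link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
        return exec.Command("sudo", "ln", "-fs", pemPath, link).Run()
    }

    func main() {
        if err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
            fmt.Println("link failed:", err)
        }
    }
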
	I1217 02:07:49.254809    6296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:07:49.269508    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:07:49.317073    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:07:49.365797    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:07:49.413853    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:07:49.462871    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:07:49.515512    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
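
Each "-checkend 86400" call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would trigger regeneration. The same check can be done natively in Go, as in this sketch (illustrative only; minikube shells out to openssl as logged):

    // certcheck_sketch.go: a native-Go equivalent of
    // "openssl x509 -noout -checkend 86400". Illustrative sketch.
    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    // validFor reports whether the first certificate in the PEM file is
    // still valid d from now, the same question -checkend asks.
    func validFor(path string, d time.Duration) (bool, error) {
        raw, err := os.ReadFile(path)
        if err != nil {
            return false, err
        }
        block, _ := pem.Decode(raw)
        if block == nil {
            return false, fmt.Errorf("%s: no PEM block found", path)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return false, err
        }
        return time.Now().Add(d).Before(cert.NotAfter), nil
    }

    func main() {
        ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
        fmt.Println(ok, err)
    }
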
	I1217 02:07:49.558666    6296 kubeadm.go:401] StartCluster: {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:49.563317    6296 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 02:07:49.602899    6296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:07:49.616365    6296 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:07:49.616365    6296 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:07:49.622022    6296 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:07:49.637152    6296 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:07:49.641090    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.693295    6296 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-383500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:49.693843    6296 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-383500" cluster setting kubeconfig missing "newest-cni-383500" context setting]
	I1217 02:07:49.694722    6296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
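
The WriteFile lines above show kubeconfig updates guarded by a named lock with a 500ms retry delay and a 1m timeout. A generic lockfile version of that acquire-with-retry pattern, as a sketch (the helper below is hypothetical and is not minikube's lock.go):

    // lockwrite_sketch.go: a lockfile-guarded write in the spirit of the
    // "WriteFile acquiring ... {Delay:500ms Timeout:1m0s}" lines above.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // writeWithLock creates path+".lock" exclusively, retrying every delay
    // up to timeout, writes the data, then releases the lock.
    func writeWithLock(path string, data []byte, delay, timeout time.Duration) error {
        lock := path + ".lock"
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                break
            }
            if time.Now().After(deadline) {
                return fmt.Errorf("timed out acquiring %s", lock)
            }
            time.Sleep(delay)
        }
        defer os.Remove(lock)
        return os.WriteFile(path, data, 0o600)
    }

    func main() {
        err := writeWithLock("kubeconfig", []byte("apiVersion: v1\n"), 500*time.Millisecond, time.Minute)
        fmt.Println(err)
    }
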
	I1217 02:07:49.716755    6296 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:07:49.731850    6296 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1217 02:07:49.731850    6296 kubeadm.go:602] duration metric: took 115.4836ms to restartPrimaryControlPlane
	I1217 02:07:49.731850    6296 kubeadm.go:403] duration metric: took 173.1816ms to StartCluster
	I1217 02:07:49.731850    6296 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:49.731850    6296 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:49.732839    6296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:49.734654    6296 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 02:07:49.734654    6296 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:07:49.734654    6296 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:70] Setting dashboard=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:49.734654    6296 addons.go:70] Setting default-storageclass=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.734654    6296 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:239] Setting addon dashboard=true in "newest-cni-383500"
	W1217 02:07:49.734654    6296 addons.go:248] addon dashboard should already be in state true
	I1217 02:07:49.735179    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.739634    6296 out.go:179] * Verifying Kubernetes components...
	I1217 02:07:49.743427    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.744378    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.744378    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
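
The repeated "docker container inspect --format={{.State.Status}}" runs above poll the node container's lifecycle state before each addon step. The same query from Go is a thin wrapper around os/exec, as in this sketch (the helper name is illustrative):

    // inspect_sketch.go: reading a container's state with the same
    // "docker container inspect --format={{.State.Status}}" call the log runs.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerStatus returns e.g. "running" or "exited" for a container name.
    func containerStatus(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format", "{{.State.Status}}").Output()
        if err != nil {
            return "", fmt.Errorf("inspect %s: %w", name, err)
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        status, err := containerStatus("newest-cni-383500")
        fmt.Println(status, err)
    }
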
	I1217 02:07:49.745812    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:49.809135    6296 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:07:49.809532    6296 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 02:07:49.812989    6296 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:49.812989    6296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:07:49.816981    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.817010    6296 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:07:49.818467    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:07:49.818467    6296 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:07:49.823270    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.824987    6296 addons.go:239] Setting addon default-storageclass=true in "newest-cni-383500"
	I1217 02:07:49.825100    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.836645    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.881995    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.881995    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.889991    6296 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:49.889991    6296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:07:49.892991    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.925992    6296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:07:49.943010    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.950996    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:50.005058    6296 api_server.go:52] waiting for apiserver process to appear ...
	I1217 02:07:50.009064    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
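
"waiting for apiserver process to appear" plays out as repeated pgrep runs (the same command recurs roughly every half second further down, at 02:07:50.510, 02:07:51.009, and so on), since pgrep exits non-zero when nothing matches. A sketch of that poll loop around the same command, with an illustrative timeout:

    // apiwait_sketch.go: polling for the kube-apiserver process with pgrep,
    // as the repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs do.
    // Hypothetical sketch; minikube's real wait logic lives in api_server.go.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForAPIServer polls pgrep until the apiserver process exists or the
    // deadline passes.
    func waitForAPIServer(timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
    }

    func main() {
        fmt.Println(waitForAPIServer(2 * time.Minute))
    }
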
	I1217 02:07:50.011068    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.014077    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:07:50.014077    6296 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:07:50.034057    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:07:50.034057    6296 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:07:50.102553    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:07:50.102611    6296 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:07:50.106900    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:50.124027    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:07:50.124027    6296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 02:07:50.189590    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:07:50.189677    6296 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1217 02:07:50.190082    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.190082    6296 retry.go:31] will retry after 343.200838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
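
Every failed apply above is followed by a retry.go line with a growing, jittered delay: the applies keep failing with "connection refused" because the apiserver on localhost:8443 is not answering yet, and the backoff gives it time to come up. A generic sketch of that retry-with-backoff shape (attempt count and delays are illustrative, not minikube's retry.go parameters):

    // retry_sketch.go: a generic retry-with-backoff loop of the kind the
    // "will retry after ...ms" lines suggest.
    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // retry runs fn up to attempts times, sleeping a jittered, growing
    // delay between failures, and returns the last error if all fail.
    func retry(attempts int, base time.Duration, fn func() error) error {
        var err error
        for i := 0; i < attempts; i++ {
            if err = fn(); err == nil {
                return nil
            }
            d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
            fmt.Printf("will retry after %v: %v\n", d, err)
            time.Sleep(d)
        }
        return err
    }

    func main() {
        _ = retry(5, 300*time.Millisecond, func() error {
            return fmt.Errorf("connection refused") // stand-in for the kubectl apply
        })
    }
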
	I1217 02:07:50.212250    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:07:50.212311    6296 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:07:50.231619    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:07:50.231619    6296 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W1217 02:07:50.241078    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.241078    6296 retry.go:31] will retry after 338.608253ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.254747    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:07:50.254794    6296 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:07:50.277655    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:50.277655    6296 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:07:50.303268    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:50.381205    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.381205    6296 retry.go:31] will retry after 204.689537ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.510673    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:50.538343    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.585518    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:50.590250    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:50.625635    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.625793    6296 retry.go:31] will retry after 198.686568ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.703247    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.703247    6296 retry.go:31] will retry after 199.792365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.713669    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.714671    6296 retry.go:31] will retry after 441.125735ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.831068    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.910787    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:50.921027    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.921027    6296 retry.go:31] will retry after 637.088373ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.993148    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.993148    6296 retry.go:31] will retry after 819.774881ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
E1217 02:13:57.029335    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
	I1217 02:07:51.009768    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.161082    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:51.282295    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.282369    6296 retry.go:31] will retry after 677.278565ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.510844    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.563702    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:51.642986    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.642986    6296 retry.go:31] will retry after 1.231128198s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.817677    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:51.902470    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.902470    6296 retry.go:31] will retry after 1.160161898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.964724    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:52.009393    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:52.053520    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.053520    6296 retry.go:31] will retry after 497.775491ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.510530    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:52.556698    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:52.641425    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.641425    6296 retry.go:31] will retry after 893.419079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.880811    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:52.961643    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.961643    6296 retry.go:31] will retry after 1.354718896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.009905    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:53.068292    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:53.159843    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.159885    6296 retry.go:31] will retry after 830.811591ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
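	Every failure in this stretch has the same root cause: kubectl's client-side validation first downloads the OpenAPI schema from the apiserver, and that download dials localhost:8443, which resolves to the IPv6 loopback [::1] inside the guest; with kube-apiserver not yet listening, the dial is refused before any manifest is evaluated (hence the --validate=false hint). A self-contained Go probe that reproduces just that first step; the address and timeout mirror the log but are otherwise illustrative.

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Same dial the OpenAPI download performs before validating anything.
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err != nil {
			// e.g. dial tcp [::1]:8443: connect: connection refused
			fmt.Println("apiserver not ready:", err)
			return
		}
		conn.Close()
		fmt.Println("apiserver port is accepting connections")
	}
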
	I1217 02:07:53.510300    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:53.539679    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:53.634195    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.634195    6296 retry.go:31] will retry after 1.875797166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.997012    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:54.010116    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:54.085004    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.085004    6296 retry.go:31] will retry after 2.403477641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.321510    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:54.401677    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.401677    6296 retry.go:31] will retry after 2.197762331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.509750    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:55.011577    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:55.509949    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:55.514301    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:55.590724    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:55.590724    6296 retry.go:31] will retry after 3.771224323s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.010995    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:56.493760    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:56.509755    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:56.580067    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.580067    6296 retry.go:31] will retry after 2.862008002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.606008    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:56.692846    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.693375    6296 retry.go:31] will retry after 3.419223727s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:57.009866    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:57.510945    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
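	The W ... node_ready.go lines interleaved here come from a second test process (pid 6768) driving the no-preload-184000 cluster; it polls that node's Ready condition through the forwarded endpoint 127.0.0.1:63565 and currently gets EOF. A minimal client-go sketch of such a readiness check, assuming a standard kubeconfig path; this is not minikube's node_ready.go.

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// nodeReady fetches the node and reports whether its Ready condition is True.
	func nodeReady(ctx context.Context, client kubernetes.Interface, name string) (bool, error) {
		node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err // e.g. EOF while the tunneled apiserver endpoint is down
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Kubeconfig path assumed for illustration.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		ready, err := nodeReady(context.Background(), client, "no-preload-184000")
		fmt.Println(ready, err)
	}
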
	I1217 02:07:57.510327    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:58.010333    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:58.511391    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:59.013796    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:59.367655    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:59.447582    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:59.457416    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.457416    6296 retry.go:31] will retry after 6.254269418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.510215    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:59.536524    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.536524    6296 retry.go:31] will retry after 4.240139996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.010517    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:00.118263    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:00.197472    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.197472    6296 retry.go:31] will retry after 5.486941273s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.511349    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:01.012031    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:01.510877    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:02.011372    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:02.510995    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.011372    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.511479    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
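	Alongside the apply retries, the bootstrapper polls roughly every 500ms for a running apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*; pgrep exits non-zero when nothing matches, so a zero exit means the process exists). A hedged sketch of that poll loop; the function name and timeout are assumptions, not minikube's code.

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	// waitForAPIServerProcess polls pgrep until a kube-apiserver process
	// appears or the context expires.
	func waitForAPIServerProcess(ctx context.Context) error {
		ticker := time.NewTicker(500 * time.Millisecond)
		defer ticker.Stop()
		for {
			if err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Run(); err == nil {
				return nil // a matching process exists
			}
			select {
			case <-ctx.Done():
				return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
			case <-ticker.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		fmt.Println(waitForAPIServerProcess(ctx))
	}
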
	I1217 02:08:03.781390    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:03.867561    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:03.867561    6296 retry.go:31] will retry after 5.255488401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:04.011296    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:04.510695    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.011055    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.510174    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.690069    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:05.718147    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:05.792389    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:05.792389    6296 retry.go:31] will retry after 3.294946391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:05.802187    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:05.802187    6296 retry.go:31] will retry after 6.599881974s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:06.010721    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:06.509941    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:07.010092    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
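Interleaved with the applies, minikube polls for an apiserver process roughly every 500ms. In the pgrep invocation, -x requires the pattern to match the full string, -f matches against the whole command line, and -n selects the newest match. A sketch of an equivalent wait loop (the ~120s timeout is an illustrative value, not minikube's):

    # Wait up to ~120s for an apiserver process, polling every 0.5s.
    for _ in $(seq 1 240); do
      if sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; then
        echo "apiserver process found"; break
      fi
      sleep 0.5
    done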
	W1217 02:08:07.543861    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:07.511303    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:08.011059    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:08.511015    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.009909    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.092821    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:09.127423    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:09.180638    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.180716    6296 retry.go:31] will retry after 13.056189647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:09.211988    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.212069    6296 retry.go:31] will retry after 13.872512266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
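Note the scheduled delays (6.6s, then 13.1s and 13.9s here, with longer waits below): minikube's retry.go reschedules each failed apply after a randomized, generally growing delay rather than a fixed interval, so the three addon applies do not retry in lockstep. A rough bash sketch of that shape; the base delay, cap, and jitter range are assumptions for illustration, not minikube's exact policy:

    # Retry a command with jittered, growing delays (illustrative values,
    # not minikube's actual backoff parameters).
    retry_apply() {
      local delay=5 attempt
      for attempt in 1 2 3 4 5; do
        "$@" && return 0
        local jitter=$((RANDOM % 5))
        echo "attempt $attempt failed; retrying in $((delay + jitter))s"
        sleep $((delay + jitter))
        delay=$((delay * 2 > 30 ? 30 : delay * 2))
      done
      return 1
    }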
	I1217 02:08:09.510829    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:10.010907    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:10.513112    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:11.010572    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:11.509543    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.010570    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.409071    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:12.497495    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:12.497495    6296 retry.go:31] will retry after 9.788092681s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:12.510004    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:13.011338    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:13.509984    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:14.010499    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:14.511126    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.010949    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.511741    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:16.011278    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:16.511157    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:17.010863    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:17.577088    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
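The 6768 lines belong to the concurrent no-preload test: it polls its own cluster's apiserver (reached via 127.0.0.1:63565) for the node's Ready condition about every ten seconds and gets EOF, meaning the TCP connection opens but is dropped before an HTTP response arrives. The equivalent check with kubectl, using the node name from the log (the jsonpath filter syntax is standard kubectl):

    # Read the node's Ready condition, as the node_ready poll does.
    kubectl get node no-preload-184000 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
    # Prints "True" once the kubelet reports Ready; fails while the
    # apiserver endpoint is unreachable.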
	I1217 02:08:17.511273    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.010782    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.510594    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:19.011193    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:19.512050    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:20.011700    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:20.511001    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.010461    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.510457    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:22.011002    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:22.242227    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:22.290434    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:22.384800    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.384884    6296 retry.go:31] will retry after 11.75975207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:22.424758    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.424758    6296 retry.go:31] will retry after 15.557196078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.510556    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:23.011645    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:23.090496    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:23.176544    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:23.176625    6296 retry.go:31] will retry after 13.26458747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:23.510872    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.011245    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.511483    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:25.011656    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:25.510967    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:26.012125    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:26.512672    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:27.011155    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:27.612061    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:27.512368    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:28.010889    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:28.511767    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.011035    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.512111    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:30.010919    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:30.510464    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:31.010433    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:31.511392    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.010680    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.510963    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:33.011818    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:33.511638    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:34.011591    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:34.151810    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:34.242474    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:34.242474    6296 retry.go:31] will retry after 23.644538854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:34.513602    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.011269    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.511142    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:36.011267    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:36.446774    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:08:36.511283    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:36.541778    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:36.541860    6296 retry.go:31] will retry after 14.024805043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:37.010743    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:37.653192    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:37.510520    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:37.987959    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:08:38.011587    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:38.113276    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:38.113276    6296 retry.go:31] will retry after 20.609884455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:38.511817    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:39.012624    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:39.511353    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:40.011079    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:40.511636    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.011582    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.512671    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:42.011503    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:42.511640    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:43.011054    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:43.510485    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.011395    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.511333    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:45.011435    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:45.513316    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:46.012600    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:46.512307    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.012227    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.512888    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:48.011996    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:48.511276    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:49.011053    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:49.511776    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:50.011678    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:50.050889    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.050889    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:50.055201    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:50.085770    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.085770    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:50.090316    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:50.123762    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.123762    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:50.127529    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:50.157626    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.157626    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:50.163652    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:50.189945    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.189945    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:50.193620    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:50.222819    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.222866    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:50.227818    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:50.256909    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.256909    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:50.260970    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:50.290387    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.290387    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
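With the apiserver still absent, minikube switches to gathering diagnostics, first enumerating the expected control-plane containers by name: under the Docker runtime, cri-dockerd names pod containers with a k8s_<container>_<pod>_... prefix, so filtering on name=k8s_kube-apiserver lists apiserver containers whether running or exited. Every filter returns zero containers here, i.e. the control plane never materialized (or has been torn down) at the container level. The same sweep written as a loop:

    # Enumerate expected control-plane containers by their k8s_ name prefix.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      echo "${c}: ${ids:-none}"
    done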
	I1217 02:08:50.290387    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:50.290387    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
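The container-status command is a small runtime-agnostic fallback chain: command substitution inserts the crictl path when it exists, otherwise the literal word crictl, and if that invocation fails (crictl missing, or its runtime endpoint down) the || falls through to plain docker ps. Unrolled with $() substitution for readability:

    # Prefer crictl when installed; otherwise (or on failure) use docker.
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a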
	I1217 02:08:50.357876    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:50.357876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:50.420098    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:50.420098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:50.460376    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:50.460376    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:50.542989    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:50.534097    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.535406    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.536541    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.537655    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.539165    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:50.534097    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.535406    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.536541    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.537655    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.539165    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
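The describe-nodes step fails the same way, since kubectl needs a live API even to read. At this point every signal agrees: the port refuses or drops connections, no control-plane containers exist, and only the kubelet/Docker journals and dmesg remain collectable. One direct probe of the apiserver from inside the node (a generic check, not something this test run performs; /healthz is a standard apiserver endpoint, -k skips verification of the cluster-internal serving cert, and depending on the cluster's anonymous-auth setting the endpoint may return an auth error instead of "ok"):

    # Probe the apiserver health endpoint directly on the node.
    # The || branch fires only when the connection itself fails.
    curl -sk https://localhost:8443/healthz || echo "apiserver not responding"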
	I1217 02:08:50.542989    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:50.542989    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:50.570331    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:50.645772    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:50.645772    6296 retry.go:31] will retry after 16.344343138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:47.695483    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:53.075519    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:53.098924    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:53.131675    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.131675    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:53.135542    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:53.166511    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.166511    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:53.170265    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:53.198547    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.198547    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:53.202694    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:53.232459    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.232459    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:53.235758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:53.263802    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.263802    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:53.268318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:53.296956    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.296956    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:53.301349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:53.331331    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.331331    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:53.335255    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:53.367520    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.367550    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:53.367577    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:53.367602    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:53.453750    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:53.444459    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.445431    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.446930    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.448003    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.449000    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:53.444459    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.445431    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.446930    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.448003    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.449000    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:53.453837    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:53.453887    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:53.485058    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:53.485058    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:53.540050    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:53.540050    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:53.604101    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:53.604101    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
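The cycle above repeats for the rest of this log: minikube probes for each expected control-plane container by its kubeadm name (k8s_kube-apiserver, k8s_etcd, and so on), finds none, and then gathers kubelet, dmesg, Docker, and container-status diagnostics before polling again. A minimal Go sketch of that probe follows — the function name and component list are illustrative, not minikube's actual logs.go code; it only assumes a working docker CLI on PATH.

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // probeContainer mirrors the probe seen in the log above: list all
    // containers (running or exited) whose name matches the kubeadm
    // convention k8s_<component>, printing only their IDs.
    func probeContainer(component string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter", "name=k8s_"+component,
    		"--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	// Same component list the log cycles through.
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "kubernetes-dashboard"} {
    		ids, err := probeContainer(c)
    		if err != nil {
    			fmt.Println("probe failed:", err)
    			continue
    		}
    		if len(ids) == 0 {
    			fmt.Printf("no container was found matching %q\n", c)
    		}
    	}
    }

An empty result for every component, as seen throughout this log, means the kubelet never managed to start the static control-plane pods, which is also why every "describe nodes" attempt below dies with connection refused on localhost:8443.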
	I1217 02:08:56.146858    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:56.172227    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:56.203897    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.203941    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:56.207562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:56.236114    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.236114    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:56.240341    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:56.274958    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.274958    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:56.280577    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:56.308906    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.308906    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:56.312811    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:56.340777    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.340836    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:56.343843    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:56.371408    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.371441    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:56.374771    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:56.406487    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.406487    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:56.410973    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:56.441247    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.441247    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:56.441247    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:56.441247    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:56.506877    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:56.506877    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.548841    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:56.548841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:56.633101    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:56.624778    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.625942    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.626969    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.628325    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.629359    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:56.624778    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.625942    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.626969    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.628325    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.629359    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:56.633101    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:56.633101    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:56.659421    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:56.659457    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:57.892877    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:57.970838    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:57.970838    6296 retry.go:31] will retry after 27.385193451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:58.728649    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:58.834139    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:58.834680    6296 retry.go:31] will retry after 32.13321777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:59.213728    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:59.238361    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:59.266298    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.266298    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:59.270295    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:59.299414    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.299414    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:59.302581    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:59.335627    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.335627    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:59.339238    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:59.367042    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.367042    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:59.371258    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:59.401507    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.401507    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:59.405468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:59.436657    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.436657    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:59.440955    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:59.471027    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.471027    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:59.474047    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:59.505164    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.505164    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:59.505164    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:59.505164    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:59.533835    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:59.533835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:59.586695    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:59.587671    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:59.648841    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:59.648841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:59.688691    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:59.688691    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:59.777044    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:59.763261    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.764003    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.767722    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.770018    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.771065    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:59.763261    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.764003    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.767722    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.770018    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.771065    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:02.282707    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:02.307570    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:02.340326    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.340412    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:02.343993    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:02.374035    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.374079    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:02.377688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	W1217 02:08:57.736771    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:02.409724    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.409724    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:02.414154    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:02.442993    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.442993    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:02.447591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:02.474966    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.474966    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:02.479447    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:02.511675    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.511675    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:02.515939    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:02.544034    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.544034    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:02.548633    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:02.578196    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.578196    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:02.578196    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:02.578196    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:02.642449    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:02.643420    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:02.681562    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:02.681562    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:02.766017    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:02.754951    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.756418    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.757119    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.759531    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.760553    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:02.754951    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.756418    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.757119    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.759531    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.760553    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:02.766017    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:02.766017    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:02.795166    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:02.795166    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:05.347132    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:05.372840    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:05.424611    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.424686    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:05.428337    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:05.461682    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.461682    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:05.465790    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:05.495395    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.495395    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:05.499215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:05.528620    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.528620    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:05.532226    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:05.560375    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.560375    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:05.564119    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:05.595214    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.595214    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:05.600088    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:05.633183    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.633183    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:05.636776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:05.664840    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.664840    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:05.664840    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:05.664840    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:05.718503    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:05.718503    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:05.781489    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:05.781489    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:05.821081    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:05.821081    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:05.905451    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:05.896107    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.897043    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.898918    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.899910    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.901056    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:05.896107    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.897043    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.898918    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.899910    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.901056    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:05.905451    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:05.905451    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:06.996471    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:09:07.077056    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:07.077056    6296 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
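The storageclass failure above makes the mechanism explicit: kubectl apply validates manifests against the cluster's OpenAPI schema fetched from /openapi/v2, so with nothing listening on localhost:8443 the schema download itself is what fails, before any manifest content is examined (hence the suggested --validate=false escape hatch in the error text). A small Go probe of that same endpoint, to separate "apiserver down" from a genuine manifest error — the function name is hypothetical and the TLS skip is for this throwaway probe only:

    package main

    import (
    	"crypto/tls"
    	"errors"
    	"fmt"
    	"net/http"
    	"syscall"
    	"time"
    )

    // apiserverReachable does what kubectl's validator does implicitly:
    // fetch the OpenAPI document. Connection refused here means the
    // apiserver is down, so every apply will fail identically no matter
    // what the manifest contains. URL matches the one in the log.
    func apiserverReachable(base string) error {
    	client := &http.Client{
    		Timeout: 5 * time.Second,
    		// The test cluster serves a self-signed cert; skip
    		// verification for this diagnostic probe only.
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get(base + "/openapi/v2")
    	if err != nil {
    		if errors.Is(err, syscall.ECONNREFUSED) {
    			return fmt.Errorf("apiserver down (connection refused): %w", err)
    		}
    		return err
    	}
    	resp.Body.Close()
    	return nil
    }

    func main() {
    	if err := apiserverReachable("https://localhost:8443"); err != nil {
    		fmt.Println(err)
    	}
    }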
	I1217 02:09:08.443326    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:08.470285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:08.499191    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.499191    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:08.503346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:08.531727    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.531727    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:08.535874    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:08.567724    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.567724    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:08.571504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:08.601814    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.601814    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:08.605003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:08.638738    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.638815    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:08.642116    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:08.672949    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.672949    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:08.676953    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:08.706081    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.706145    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:08.709298    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:08.737856    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.737856    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:08.737856    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:08.737856    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:08.798236    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:08.798236    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:08.838053    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:08.838053    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:08.925271    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:08.915579    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.916804    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.917832    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.919242    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.920277    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:08.915579    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.916804    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.917832    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.919242    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.920277    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:08.925271    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:08.925271    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:08.952860    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:08.952934    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.505032    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:11.532273    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:11.560855    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.560907    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:11.564808    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:11.595967    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.596024    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:11.599911    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:11.628443    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.628443    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:11.632103    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:11.659899    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.659899    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:11.663896    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:11.695830    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.695864    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:11.699333    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:11.728245    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.728314    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:11.731834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:11.762004    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.762038    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:11.765497    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:11.800437    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.800437    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:11.800437    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:11.800437    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.850659    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:11.850659    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:11.927328    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:11.927328    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:11.968115    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:11.968115    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:12.061366    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:12.049456    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.050395    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.051658    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.052989    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.055935    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:12.049456    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.050395    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.051658    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.052989    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.055935    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:12.061366    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:12.061366    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:09:07.775163    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:14.593463    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:14.619698    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:14.649625    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.649625    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:14.653809    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:14.682807    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.682865    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:14.686225    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:14.716867    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.716867    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:14.720947    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:14.748712    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.748712    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:14.753598    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:14.786467    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.786467    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:14.790745    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:14.820388    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.820388    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:14.824364    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:14.856683    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.856715    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:14.860387    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:14.907334    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.907388    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:14.907388    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:14.907388    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:14.970536    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:14.971543    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:15.009837    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:15.009837    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:15.100833    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:15.089537    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.090644    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.091541    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.092652    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.093429    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:15.089537    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.090644    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.091541    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.092652    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.093429    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:15.100833    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:15.100833    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:15.129774    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:15.129838    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:17.687506    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:17.711884    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:17.740676    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.740676    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:17.743807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:17.775526    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.775598    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:17.779196    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:17.810564    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.810564    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:17.815366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:17.847149    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.847149    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:17.850304    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:17.880825    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.880825    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:17.884416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:17.913663    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.913663    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:17.917519    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:17.949675    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.949736    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:17.953399    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:17.981777    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.981777    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:17.981853    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:17.981853    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:18.045143    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:18.045143    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:18.085682    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:18.085682    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:18.174824    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:18.164839    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.166260    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.167755    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.169313    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.170543    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:18.164839    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.166260    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.167755    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.169313    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.170543    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:18.174862    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:18.174890    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:18.201721    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:18.201721    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:20.754573    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:20.779418    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:20.815289    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.815336    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:20.821329    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:20.849494    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.849566    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:20.853416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:20.886139    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.886213    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:20.890864    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:20.921623    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.921691    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:20.925413    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:20.955001    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.955030    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:20.959115    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:20.986446    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.986446    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:20.990622    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:21.019381    6296 logs.go:282] 0 containers: []
	W1217 02:09:21.019903    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:21.023386    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:21.049708    6296 logs.go:282] 0 containers: []
	W1217 02:09:21.049708    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:21.049708    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:21.049708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:21.114512    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:21.114512    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:21.154312    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:21.154312    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:21.241835    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:21.232254    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.233191    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.235446    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.236247    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.238241    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:21.232254    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.233191    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.235446    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.236247    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.238241    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:21.241835    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:21.241835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:21.269935    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:21.269935    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
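[Editor's note] The repeating blocks above are minikube's log-gathering loop: for each control-plane component it lists containers matching name=k8s_<component> and warns when none exist (logs.go:282/284). A rough sketch of that loop, assuming plain docker CLI access; the helper and its structure are illustrative, not minikube's actual code:

    // sketch of the per-component container lookup seen in the log.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, c := range components {
            // docker ps -a --filter=name=k8s_<component> --format={{.ID}}
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c,
                "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("docker ps failed for %q: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            if len(ids) == 0 {
                fmt.Printf("No container was found matching %q\n", c)
            }
        }
    }

With no apiserver running, every lookup in the log returns "0 containers", which is why the same eight warnings recur on each pass.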
	W1217 02:09:17.811223    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:23.827385    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:23.851293    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:23.884017    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.884017    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:23.887852    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:23.920819    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.920819    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:23.925124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:23.953397    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.953468    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:23.957090    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:23.987965    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.987965    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:23.992238    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:24.021188    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.021188    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:24.027472    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:24.059066    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.059066    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:24.062927    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:24.092066    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.092066    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:24.096083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:24.130020    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.130083    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:24.130083    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:24.130083    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:24.193264    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:24.193264    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:24.233590    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:24.233590    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:24.334738    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:24.323376    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.324478    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.325163    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327407    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327995    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:24.323376    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.324478    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.325163    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327407    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327995    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:24.334738    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:24.334738    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:24.361711    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:24.361711    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:25.361736    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:09:25.443830    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:25.443830    6296 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
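[Editor's note] The storage-provisioner apply fails for the same underlying reason: kubectl cannot download the OpenAPI schema for client-side validation while the apiserver is down, so minikube records "apply failed, will retry" (addons.go:477). A minimal apply-with-retry wrapper of the same general shape, hedged: the manifest path is taken from the log, but the retry policy and helper are illustrative assumptions, not minikube's implementation:

    // sketch of an apply-with-retry loop like the one behind
    // "apply failed, will retry" above.
    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    func applyWithRetry(manifest string, attempts int, delay time.Duration) error {
        var err error
        for i := 0; i < attempts; i++ {
            cmd := exec.Command("kubectl", "apply", "--force", "-f", manifest)
            if out, e := cmd.CombinedOutput(); e != nil {
                err = fmt.Errorf("attempt %d: %v: %s", i+1, e, out)
                time.Sleep(delay) // the apiserver may still be coming up
                continue
            }
            return nil
        }
        return err
    }

    func main() {
        err := applyWithRetry("/etc/kubernetes/addons/storage-provisioner.yaml",
            3, 5*time.Second)
        if err != nil {
            fmt.Println("giving up:", err)
        }
    }

In this run the apiserver never came up, so every retry exhausted and the addon summary further down reports enabled=[].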
	I1217 02:09:26.915928    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:26.940552    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:26.972265    6296 logs.go:282] 0 containers: []
	W1217 02:09:26.972334    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:26.975468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:27.004131    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.004131    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:27.007688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:27.040755    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.040755    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:27.044298    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:27.075607    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.075607    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:27.079764    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:27.109726    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.109726    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:27.113807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:27.142060    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.142060    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:27.145049    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:27.179827    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.179898    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:27.183340    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:27.212340    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.212340    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:27.212340    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:27.212340    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:27.290453    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:27.280957    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.282008    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.283593    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.284873    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.286226    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:27.280957    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.282008    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.283593    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.284873    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.286226    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:27.290453    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:27.290453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:27.317919    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:27.317919    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:27.372636    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:27.372636    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:27.434881    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:27.434881    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:29.980965    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:30.007081    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:30.038766    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.038766    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:30.042837    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:30.074216    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.074277    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:30.077495    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:30.109815    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.109815    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:30.113543    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:30.144692    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.144692    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:30.148595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:30.181530    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.181530    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:30.185056    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:30.230054    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.230054    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:30.233965    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:30.264421    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.264421    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:30.268191    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:30.302463    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.302463    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:30.302463    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:30.302463    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:30.369905    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:30.369905    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:30.407364    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:30.407364    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:30.501045    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:30.489137    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.491259    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.493208    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.494311    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.496063    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:30.489137    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.491259    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.493208    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.494311    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.496063    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:30.501045    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:30.501045    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:30.529058    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:30.529119    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:30.973740    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:09:31.053832    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:31.053832    6296 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:09:31.057712    6296 out.go:179] * Enabled addons: 
	I1217 02:09:31.060716    6296 addons.go:530] duration metric: took 1m41.3245326s for enable addons: enabled=[]
	W1217 02:09:27.847902    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:33.093000    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:33.117479    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:33.148299    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.148299    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:33.152403    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:33.180747    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.180747    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:33.184258    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:33.214319    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.214389    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:33.217921    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:33.244463    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.244463    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:33.248882    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:33.280520    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.280573    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:33.284251    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:33.313836    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.313883    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:33.318949    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:33.351545    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.351545    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:33.355242    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:33.384638    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.384638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:33.384638    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:33.384638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:33.438624    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:33.438624    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:33.503148    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:33.504145    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:33.542770    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:33.542770    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:33.628872    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:33.616788    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.618355    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.619202    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.622311    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.623559    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:33.616788    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.618355    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.619202    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.622311    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.623559    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:33.628872    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:33.628872    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:36.163766    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:36.190660    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:36.219485    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.219485    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:36.223169    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:36.253826    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.253826    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:36.257584    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:36.289684    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.289684    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:36.293455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:36.321228    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.321228    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:36.326076    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:36.355893    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.355893    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:36.360432    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:36.392307    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.392359    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:36.395377    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:36.427797    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.427797    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:36.431432    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:36.465462    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.465547    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:36.465590    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:36.465605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:36.515585    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:36.515688    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:36.577828    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:36.577828    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:36.617923    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:36.617923    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:36.706865    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:36.696037    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.697154    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.698217    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.699314    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.700190    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:36.696037    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.697154    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.698217    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.699314    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.700190    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:36.706865    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:36.706865    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:39.240583    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:39.269426    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:39.300548    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.300548    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:39.304455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:39.337640    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.337640    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:39.341427    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:39.375280    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.375280    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:39.379328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:39.408206    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.408291    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:39.413138    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:39.439760    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.439760    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:39.443728    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:39.470865    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.471120    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:39.477630    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:39.510101    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.510101    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:39.515759    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:39.545423    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.545494    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:39.545494    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:39.545559    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:39.574474    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:39.574474    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:39.627410    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:39.627410    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:39.687852    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:39.687852    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:39.730823    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:39.730823    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:39.820771    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:39.809479    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.810890    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.811655    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.814487    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.816836    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
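Every kubectl attempt in this run fails the same way: nothing is listening on 127.0.0.1:8443, so the client gets "connection refused" before TLS or authentication even enter the picture. A minimal sketch of that connectivity check (illustrative only; the address and timeout are assumptions, not minikube's code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// A plain TCP dial distinguishes "nothing listening" (connection refused,
	// as in the log above) from a failure later in the TLS/auth handshake.
	conn, err := net.DialTimeout("tcp", "127.0.0.1:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("something is listening on 8443")
}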
	I1217 02:09:42.326489    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:42.349989    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:42.381673    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.381673    6296 logs.go:284] No container was found matching "kube-apiserver"
	W1217 02:09:37.889672    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:42.385392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:42.414575    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.414575    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:42.418510    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:42.452120    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.452120    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:42.456157    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:42.484625    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.484625    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:42.487782    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:42.520235    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.520235    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:42.525546    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:42.558589    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.558589    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:42.561770    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:42.592364    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.592364    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:42.596368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:42.625522    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.625522    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:42.625522    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:42.625522    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:42.661616    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:42.661616    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:42.748046    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:42.737433    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.739312    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.740542    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.743197    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.744170    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:42.748046    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:42.748046    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:42.778854    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:42.778854    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:42.827860    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:42.827860    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:45.394220    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:45.418501    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:45.453084    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.453132    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:45.457433    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:45.491679    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.491679    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:45.495517    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:45.524934    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.524934    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:45.528788    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:45.559787    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.559837    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:45.563714    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:45.608019    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.608104    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:45.612132    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:45.639869    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.639869    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:45.644002    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:45.671767    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.671767    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:45.675466    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:45.704056    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.704104    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:45.704104    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:45.704104    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:45.766557    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:45.766557    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:45.807449    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:45.807449    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:45.898686    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:45.887850    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.888794    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.889893    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.891161    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.894108    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:45.898686    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:45.898686    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:45.924614    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:45.924614    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
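Each retry cycle performs the same component census: one docker ps -a --filter=name=k8s_<component> --format={{.ID}} per control-plane piece, and every lookup in this log returns "0 containers: []". A hedged sketch of that lookup, shelling out to the same docker command (the component list below is illustrative, not exhaustive):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers whose name matches the
// given filter; an empty slice mirrors the "0 containers: []" lines above.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name="+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "lookup failed:", err)
			continue
		}
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}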
	I1217 02:09:48.482563    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:48.510137    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:48.546063    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.546063    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:48.551905    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:48.588536    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.588617    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:48.592628    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:48.621540    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.621540    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:48.625701    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:48.653505    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.653505    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:48.659485    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:48.688940    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.689008    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:48.692649    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:48.718858    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.718858    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:48.722907    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:48.752451    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.752451    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:48.755913    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:48.785865    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.785903    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:48.785903    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:48.785948    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:48.842730    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:48.843261    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:48.905352    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:48.905352    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:48.945271    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:48.945271    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:49.027913    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:49.016272    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.017718    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.022195    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.023419    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.024431    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:49.027963    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:49.027963    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:51.563182    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:51.587223    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:51.619597    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.619621    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:51.623355    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:51.652069    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.652152    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:51.655716    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:51.684602    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.684653    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:51.687735    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:51.716327    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.716327    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:51.720054    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:51.750202    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.750266    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:51.753821    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:51.781863    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.781863    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:51.785648    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:51.814791    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.814841    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:51.818565    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:51.850654    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.850654    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:51.850654    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:51.850654    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:51.912429    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:51.912429    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:51.951795    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:51.951795    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:52.035486    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:52.024665    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.026342    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.028055    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.029764    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.030775    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:52.035486    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:52.035486    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:52.063472    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:52.063472    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:09:47.930106    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:54.631678    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:54.657392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:54.689037    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.689037    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:54.692460    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:54.723231    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.723231    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:54.729158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:54.759168    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.759168    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:54.762883    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:54.792371    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.792371    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:54.796165    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:54.828375    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.828375    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:54.832201    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:54.862409    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.862476    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:54.866107    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:54.897161    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.897161    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:54.900834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:54.947452    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.947452    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:54.947452    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:54.947452    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:55.016411    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:55.016411    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:55.055628    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:55.055628    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:55.152557    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:55.141168    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.142077    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.145931    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.147597    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.148932    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:55.152599    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:55.152599    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:55.180492    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:55.180492    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:57.741989    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:57.768328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:57.799200    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.799200    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:57.803065    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:57.832042    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.832042    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:57.835921    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:57.863829    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.863891    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:57.867347    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:57.896797    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.896822    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:57.900369    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:57.929832    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.929907    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:57.933326    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:57.960278    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.960278    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:57.964215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:57.992277    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.992324    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:57.995951    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:58.026155    6296 logs.go:282] 0 containers: []
	W1217 02:09:58.026254    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:58.026254    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:58.026303    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:58.091999    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:58.091999    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:58.131520    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:58.131520    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:58.226831    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:58.216784    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.218266    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.219997    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.221198    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.222992    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:58.226831    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:58.226831    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:58.256592    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:58.256635    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:00.809919    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:00.842222    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:00.872955    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.872955    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:00.876666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:00.906031    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.906031    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:00.909593    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:00.939873    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.939946    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:00.943346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:00.972609    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.972643    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:00.975886    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:01.005269    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.005269    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:01.009766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:01.041677    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.041677    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:01.048361    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:01.081235    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.081312    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:01.084849    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:01.113437    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.113437    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:01.113437    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:01.113437    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:01.160067    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:01.160624    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:01.225071    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:01.225071    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:01.265307    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:01.265307    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:01.348506    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:01.336920    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.338210    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.339738    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.341232    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.342188    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:01.348535    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:01.348571    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:09:57.967423    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:03.891628    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:03.925404    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:03.965688    6296 logs.go:282] 0 containers: []
	W1217 02:10:03.965688    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:03.968982    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:04.006348    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.006348    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:04.009769    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:04.039968    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.039968    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:04.044404    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:04.078472    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.078472    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:04.081894    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:04.113348    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.113348    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:04.117138    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:04.148885    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.148885    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:04.152756    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:04.181559    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.181616    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:04.185351    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:04.217017    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.217017    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:04.217017    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:04.217017    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:04.284540    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:04.284540    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:04.324402    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:04.324402    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:04.409943    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:04.395416    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.396326    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.402206    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.403321    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.404006    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:04.409943    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:04.409943    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:04.438771    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:04.438771    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:06.997897    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:07.024185    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:07.054915    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.055512    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:07.060167    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:07.089778    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.089778    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:07.093773    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:07.124641    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.124641    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:07.128016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:07.154834    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.154915    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:07.158505    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:07.188568    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.188568    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:07.192962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:07.225078    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.225078    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:07.228699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:07.258599    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.258659    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:07.262590    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:07.291623    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.291623    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:07.291623    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:07.291623    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:07.322611    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:07.322611    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:07.374970    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:07.374970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:07.438795    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:07.438795    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:07.479442    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:07.479442    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:07.566162    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:07.555486    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.557015    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.558199    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559195    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559622    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
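Taken together, the timestamps show a poll loop: the whole census-and-gather pass repeats roughly every three seconds, and no control-plane container ever appears before the test gives up. A minimal sketch of such a wait-until-deadline loop (the 3s interval and 90s timeout are assumptions for illustration, not minikube's actual tuning):

package main

import (
	"errors"
	"fmt"
	"net"
	"time"
)

// waitReady retries a TCP readiness check every interval until timeout.
func waitReady(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		time.Sleep(interval)
	}
	return errors.New("timed out waiting for " + addr)
}

func main() {
	if err := waitReady("127.0.0.1:8443", 3*time.Second, 90*time.Second); err != nil {
		// This run ends the same way: the apiserver never comes up.
		fmt.Println(err)
	}
}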
	I1217 02:10:10.072312    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:10.096505    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:10.125617    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.125617    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:10.129377    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:10.157921    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.157921    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:10.161850    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:10.191705    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.191705    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:10.196003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:10.224412    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.224482    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:10.229368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:10.258140    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.258140    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:10.261205    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:10.292047    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.292047    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:10.296511    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:10.325818    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.325818    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:10.329752    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:10.359454    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.359530    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:10.359530    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:10.359530    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:10.413970    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:10.413970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:10.476665    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:10.476665    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:10.516335    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:10.516335    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:10.602353    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:10.592838    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.594139    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.595393    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.596552    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.597619    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:10.592838    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.594139    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.595393    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.596552    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.597619    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:10.602353    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:10.602353    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:10:08.007712    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:13.134148    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:13.159720    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:13.191534    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.191534    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:13.195626    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:13.230035    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.230035    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:13.233817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:13.266476    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.266476    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:13.270598    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:13.305852    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.305852    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:13.310349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:13.341805    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.341867    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:13.345346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:13.377945    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.377945    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:13.381659    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:13.411885    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.411957    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:13.416039    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:13.446642    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.446642    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:13.446642    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:13.446642    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:13.487083    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:13.487083    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:13.574632    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:13.564930    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.565686    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.568158    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.569159    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.570310    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:13.564930    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.565686    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.568158    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.569159    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.570310    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:13.574632    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:13.574632    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:13.604181    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:13.604702    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:13.660020    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:13.660020    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:16.225038    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:16.248922    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:16.280247    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.280247    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:16.284285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:16.312596    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.312596    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:16.316952    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:16.345108    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.345108    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:16.348083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:16.377403    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.377403    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:16.380619    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:16.410555    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.410555    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:16.414048    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:16.446454    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.446454    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:16.449405    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:16.478967    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.478967    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:16.484108    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:16.516422    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.516422    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:16.516422    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:16.516422    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:16.580305    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:16.580305    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:16.618663    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:16.618663    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:16.705105    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:16.694074    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.695040    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.696842    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.698676    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.700646    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:16.694074    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.695040    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.696842    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.698676    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.700646    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:16.705105    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:16.705105    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:16.732046    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:16.732046    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:19.284431    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:19.307909    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:19.340842    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.340842    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:19.344830    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:19.371150    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.371150    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:19.374863    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:19.403216    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.403216    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:19.406907    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:19.433979    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.433979    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:19.438046    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:19.469636    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.469636    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:19.473675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:19.504296    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.504296    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:19.508671    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:19.535932    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.535932    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:19.539707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:19.567355    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.567416    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:19.567416    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:19.567416    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:19.629876    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:19.629876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:19.678547    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:19.678547    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:19.785306    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:19.776195    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.777270    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.778111    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.779442    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.780820    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:19.776195    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.777270    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.778111    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.779442    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.780820    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:19.785306    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:19.785371    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:19.813137    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:19.813137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:22.369643    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:10:18.049946    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:22.396731    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:22.431018    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.431018    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:22.434688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:22.463307    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.463307    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:22.467323    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:22.497065    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.497065    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:22.500574    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:22.531497    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.531564    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:22.535088    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:22.563706    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.563779    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:22.567344    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:22.602516    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.602597    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:22.606242    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:22.637637    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.637699    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:22.641314    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:22.668078    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.668078    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:22.668078    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:22.668078    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:22.754963    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:22.744973    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.745956    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.748143    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.749016    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.751155    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:22.744973    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.745956    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.748143    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.749016    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.751155    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:22.754963    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:22.754963    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:22.783172    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:22.783222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:22.840048    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:22.840048    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:22.900137    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:22.900137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:25.445900    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:25.472646    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:25.502929    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.502929    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:25.506274    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:25.537721    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.537721    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:25.543044    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:25.572924    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.572924    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:25.576391    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:25.607737    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.607798    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:25.611457    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:25.644967    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.645041    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:25.648690    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:25.677801    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.677801    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:25.681530    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:25.709148    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.709148    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:25.715667    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:25.746892    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.746892    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:25.746892    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:25.746892    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:25.796336    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:25.796336    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:25.862353    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:25.862353    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:25.902100    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:25.902100    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:25.988926    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:25.979946    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.980923    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.983755    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.985453    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.986609    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:25.979946    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.980923    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.983755    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.985453    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.986609    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:25.988926    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:25.988926    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:28.523475    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:28.549366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:28.580055    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.580055    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:28.583822    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:28.615168    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.615168    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:28.618724    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:28.650344    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.650368    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:28.654014    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:28.704033    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.704033    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:28.707699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:28.738871    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.738938    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:28.743270    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:28.775432    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.775432    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:28.779176    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:28.810234    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.810351    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:28.814357    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:28.845783    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.845783    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:28.845783    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:28.845783    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:28.902626    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:28.902626    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:28.963758    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:28.963758    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:29.002141    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:29.002141    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:29.104674    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:29.094415    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.095636    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.096872    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.097927    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.099112    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:29.094415    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.095636    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.096872    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.097927    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.099112    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
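Each roughly three-second cycle in this stretch of the log is the same sequence: pgrep for a kube-apiserver process, then one `docker ps -a --filter=name=...` query per expected control-plane container (apiserver, etcd, coredns, scheduler, proxy, controller-manager, kindnet, dashboard), and finally a log-gathering pass once every query comes back empty. The sketch below illustrates that kind of poll-until-found loop; it is illustrative only, assumes a local docker CLI, and is not minikube's actual implementation.

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // containerIDs lists container IDs whose name matches the given filter,
    // mirroring the `docker ps -a --filter=name=... --format={{.ID}}`
    // calls in the log above.
    func containerIDs(ctx context.Context, name string) ([]string, error) {
    	out, err := exec.CommandContext(ctx, "docker", "ps", "-a",
    		"--filter", "name="+name, "--format", "{{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
    	defer cancel()

    	// Poll every three seconds, like the cycles in the log, until the
    	// apiserver container shows up or the overall deadline expires.
    	ticker := time.NewTicker(3 * time.Second)
    	defer ticker.Stop()
    	for {
    		ids, err := containerIDs(ctx, "k8s_kube-apiserver")
    		if err == nil && len(ids) > 0 {
    			fmt.Println("found apiserver container:", ids[0])
    			return
    		}
    		fmt.Printf("%d containers: %v (retrying)\n", len(ids), ids)
    		select {
    		case <-ctx.Done():
    			fmt.Println("gave up:", ctx.Err())
    			return
    		case <-ticker.C:
    		}
    	}
    }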
	I1217 02:10:29.104674    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:29.104674    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:31.640270    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:31.668862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:31.703099    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.703099    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:31.706355    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:31.737408    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.737408    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:31.741549    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:31.771462    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.771549    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:31.775645    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:31.803600    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.803600    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:31.807313    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:31.835884    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.835884    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:31.840000    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:31.870518    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.870518    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:31.877548    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:31.905387    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.905387    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:31.909722    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:31.938258    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.938284    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:31.938284    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:31.938284    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:32.000115    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:32.000115    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:32.039351    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:32.039351    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:32.128849    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:32.117556    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.118519    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.121192    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.122137    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.123350    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:32.117556    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.118519    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.121192    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.122137    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.123350    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:32.128849    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:32.128849    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:32.155670    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:32.155670    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:10:28.083644    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:34.707099    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:34.732689    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:34.763625    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.763625    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:34.767349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:34.797435    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.797435    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:34.801415    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:34.828785    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.828785    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:34.832654    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:34.864748    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.864748    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:34.868392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:34.896365    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.896365    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:34.900474    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:34.932681    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.932681    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:34.936571    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:34.966056    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.966056    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:34.969208    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:34.998362    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.998362    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:34.998362    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:34.998362    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:35.036977    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:35.036977    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:35.134841    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:35.123096    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.125161    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.126319    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.127728    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.129900    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:35.123096    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.125161    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.126319    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.127728    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.129900    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:35.134841    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:35.134841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:35.162429    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:35.162429    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:35.213960    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:35.214015    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:37.779857    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:37.806799    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:37.840730    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.840730    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:37.846443    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:37.875504    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.875504    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:37.879215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:37.910068    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.910068    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:37.913551    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:37.942897    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.942897    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:37.946741    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:37.978321    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.978321    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:37.982267    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:38.008421    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.008421    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:38.013043    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:38.043041    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.043041    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:38.049737    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:38.082117    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.082117    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:38.082117    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:38.082117    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:38.148970    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:38.148970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:38.189697    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:38.189697    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:38.276122    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:38.265842    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.267106    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.268317    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.270927    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.272044    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:38.265842    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.267106    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.268317    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.270927    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.272044    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:38.276122    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:38.276122    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:38.304355    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:38.304355    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
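Each diagnostic pass above has the same shape: probe for a running kube-apiserver process (sudo pgrep -xnf kube-apiserver.*minikube.*), then look for each expected control-plane container by Docker name filter, logging "0 containers" plus a warning when the ID list comes back empty. A minimal Go sketch of that per-component lookup, run locally rather than through minikube's ssh_runner (the helper name listContainers is illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainers returns the IDs of containers whose name matches the
// k8s_<component> prefix, mirroring the filter used in the log:
//   docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
func listContainers(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"kube-apiserver", "etcd", "coredns"} {
		ids, err := listContainers(c)
		if err != nil {
			fmt.Println("lookup failed:", err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			// This empty result is what produces the repeated
			// "No container was found matching" warnings above.
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}
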
	I1217 02:10:40.862712    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:40.889041    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:40.921169    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.921169    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:40.924297    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:40.956313    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.956356    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:40.960294    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:40.990144    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.990144    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:40.993876    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:41.026732    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.026803    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:41.030745    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:41.073825    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.073825    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:41.078152    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:41.105859    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.105859    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:41.111714    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:41.143286    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.143324    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:41.146776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:41.176314    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.176345    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:41.176345    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:41.176345    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:41.213266    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:41.213266    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:41.300305    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:41.290426    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.291562    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.292511    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.293690    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.294979    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:41.290426    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.291562    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.292511    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.293690    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.294979    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:41.300305    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:41.300305    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:41.328560    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:41.328621    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:41.375953    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:41.375953    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:10:38.119927    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
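The node_ready.go:55 line above comes from a different process than the 6296 gather loop: it is interleaved output from the parallel no-preload-184000 run (PID 6768), which is polling that node's Ready condition and getting EOF because its own apiserver endpoint (taken from its kubeconfig) is also unreachable. A minimal sketch of that style of readiness check using client-go, an assumption on my part; the function name and kubeconfig path are illustrative, not minikube's actual code:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(kubeconfig, name string) (bool, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return false, err
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return false, err
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		// While the apiserver is down this surfaces as EOF or
		// connection refused, as in the warning above.
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := nodeReady("/var/lib/minikube/kubeconfig", "no-preload-184000")
	fmt.Println(ok, err)
}
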
	I1217 02:10:43.941613    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:43.967455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:44.000199    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.000199    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:44.003568    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:44.035058    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.035058    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:44.040590    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:44.083687    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.083687    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:44.087476    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:44.115776    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.115776    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:44.119318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:44.155471    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.155513    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:44.159433    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:44.191599    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.191636    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:44.195145    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:44.228181    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.228211    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:44.231971    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:44.259687    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.259763    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:44.259763    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:44.259763    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:44.323705    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:44.323705    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:44.365401    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:44.365401    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:44.453893    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:44.444848    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.446165    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.447569    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.449198    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.450326    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:44.444848    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.446165    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.447569    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.449198    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.450326    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:44.453893    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:44.453893    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:44.480694    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:44.480694    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
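The per-source gathering is a plain shell fan-out: kubelet and docker/cri-docker logs come from journalctl capped at 400 lines, and dmesg is filtered to warn-and-above severities. A sketch of driving one such source through bash -c, assuming local execution in place of the SSH hop the log shows:

package main

import (
	"fmt"
	"os/exec"
)

// gather runs one log-source command through bash -c, the same way the
// ssh_runner lines above do over SSH (here executed locally).
func gather(name, cmd string) {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	if err != nil {
		fmt.Printf("gathering %s failed: %v\n", name, err)
		return
	}
	fmt.Printf("== %s ==\n%s", name, out)
}

func main() {
	gather("kubelet", "sudo journalctl -u kubelet -n 400")
	gather("dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400")
	gather("Docker", "sudo journalctl -u docker -u cri-docker -n 400")
}
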
	I1217 02:10:47.042501    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:47.067663    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:47.108433    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.108433    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:47.112206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:47.144336    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.144336    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:47.148449    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:47.182968    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.183049    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:47.186614    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:47.215738    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.215738    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:47.219595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:47.248444    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.248511    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:47.252434    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:47.280975    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.280975    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:47.284966    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:47.317178    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.317178    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:47.321223    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:47.352638    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.352638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:47.352638    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:47.352638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:47.390049    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:47.390049    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:47.479425    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:47.469913    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.471092    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.472262    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.473545    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.474680    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:47.469913    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.471092    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.472262    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.473545    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.474680    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:47.479425    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:47.479425    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:47.505331    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:47.505331    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:47.556431    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:47.556431    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:50.124255    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:50.151100    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:50.184499    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.184565    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:50.187696    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:50.221764    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.221764    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:50.225471    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:50.253823    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.253823    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:50.260470    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:50.289768    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.289815    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:50.295283    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:50.321597    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.321597    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:50.325774    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:50.356707    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.356707    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:50.360685    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:50.390099    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.390099    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:50.393971    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:50.420950    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.420950    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:50.420950    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:50.420950    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:50.484730    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:50.484730    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:50.523997    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:50.523997    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:50.618256    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:50.607046    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.608047    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.610609    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.611743    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.612938    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:50.607046    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.608047    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.610609    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.611743    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.612938    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:50.618256    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:50.618256    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:50.645077    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:50.645077    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
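The "container status" source carries its own fallback: prefer crictl when it is on PATH, otherwise fall back to plain docker ps -a, which is what the one-liner "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" encodes. The same fallback expressed directly in Go, as a sketch assuming local execution:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl when available and falls back to
// docker, matching the shell one-liner in the log.
func containerStatus() (string, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		out, err := exec.Command("crictl", "ps", "-a").CombinedOutput()
		if err == nil {
			return string(out), nil
		}
	}
	out, err := exec.Command("docker", "ps", "-a").CombinedOutput()
	return string(out), err
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both runtimes failed:", err)
		return
	}
	fmt.Print(out)
}
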
	W1217 02:10:48.158175    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:53.200622    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:53.223348    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:53.253589    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.253589    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:53.258688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:53.287647    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.287689    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:53.291555    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:53.324358    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.324403    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:53.327650    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:53.355417    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.355417    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:53.359780    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:53.390012    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.390012    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:53.393536    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:53.420636    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.420672    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:53.424429    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:53.453665    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.453744    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:53.456764    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:53.486769    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.486836    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:53.486875    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:53.486875    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:53.552513    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:53.552513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:53.593054    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:53.593054    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:53.683171    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:53.673168    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.674217    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.677093    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.678848    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.679784    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:53.673168    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.674217    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.677093    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.678848    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.679784    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:53.683207    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:53.683230    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:53.712513    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:53.712513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:56.288600    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:56.314380    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:56.347447    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.347447    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:56.351158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:56.381779    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.381779    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:56.385232    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:56.423000    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.423000    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:56.427083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:56.456635    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.456635    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:56.460509    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:56.490868    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.490868    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:56.496594    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:56.523671    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.523671    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:56.527847    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:56.559992    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.559992    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:56.565352    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:56.591708    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.591708    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:56.591708    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:56.591708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:56.656572    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:56.656572    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:56.696334    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:56.696334    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:56.788411    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:56.777962   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.779251   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.780163   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.782593   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.783670   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:56.777962   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.779251   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.780163   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.782593   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.783670   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:56.788411    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:56.788411    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:56.815762    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:56.815762    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:59.370676    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:59.404615    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:59.440735    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.440735    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:59.446758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:59.475209    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.475209    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:59.479521    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:59.509465    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.509465    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:59.513228    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:59.542409    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.542409    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:59.546008    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:59.575778    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.575778    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:59.579759    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:59.613465    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.613465    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:59.617266    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:59.645245    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.645245    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:59.649170    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:59.680413    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.680449    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:59.680449    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:59.680449    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:59.713987    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:59.713987    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:59.764930    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:59.764994    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:59.832077    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:59.832077    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:59.870681    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:59.870681    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:59.953336    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:59.942085   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.942906   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.945651   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.947051   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.948218   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:59.942085   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.942906   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.945651   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.947051   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.948218   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
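Every describe-nodes attempt in this stretch fails identically: with no kube-apiserver container up, the kubeconfig inside the node points kubectl at localhost:8443, the dial is refused, and logs.go records the command's stdout and stderr in full. A sketch of how a caller might classify that failure and keep retrying instead of aborting; the retry count and interval here are illustrative assumptions, not minikube's values:

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// describeNodes runs kubectl against the in-node kubeconfig and
// reports whether a failure looks like "apiserver not up yet".
func describeNodes(kubeconfig string) (string, bool, error) {
	out, err := exec.Command("kubectl", "describe", "nodes",
		"--kubeconfig", kubeconfig).CombinedOutput()
	if err == nil {
		return string(out), false, nil
	}
	refused := strings.Contains(string(out), "connection refused")
	return string(out), refused, err
}

func main() {
	for i := 0; i < 5; i++ {
		out, refused, err := describeNodes("/var/lib/minikube/kubeconfig")
		if err == nil {
			fmt.Print(out)
			return
		}
		if !refused {
			fmt.Println("unexpected failure:", err)
			return
		}
		time.Sleep(3 * time.Second) // apiserver still down; retry
	}
	fmt.Println("gave up waiting for the apiserver")
}
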
	W1217 02:10:58.200115    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:11:02.457745    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:02.492666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:02.526665    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.526665    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:02.530862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:02.560353    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.560413    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:02.564099    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:02.595430    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.595430    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:02.599884    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:02.629744    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.629744    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:02.633637    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:02.662623    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.662623    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:02.666817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:02.694696    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.694696    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:02.698194    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:02.727384    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.727442    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:02.731483    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:02.766114    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.766114    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:02.766114    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:02.766114    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:02.830755    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:02.830755    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:02.870216    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:02.870216    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:02.958327    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:02.947356   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.948306   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.949403   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.950298   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.952486   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:02.947356   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.948306   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.949403   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.950298   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.952486   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:02.958327    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:02.958380    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:02.984980    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:02.984980    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:05.540158    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:05.564812    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:05.595638    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.595638    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:05.599748    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:05.628748    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.628748    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:05.632878    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:05.666232    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.666257    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:05.670293    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:05.699654    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.699654    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:05.703004    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:05.733113    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.733113    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:05.737096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:05.765591    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.765639    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:05.770398    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:05.796360    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.796360    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:05.800240    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:05.829847    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.829914    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:05.829914    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:05.829945    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:05.880789    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:05.880789    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:05.943002    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:05.943002    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:05.983389    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:05.983389    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:06.076023    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:06.063780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.064562   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.067564   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.069726   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.070666   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:06.076023    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:06.076023    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:08.608606    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:08.632215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:08.665017    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.665017    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:08.669299    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:08.695355    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.695355    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:08.699306    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:08.729054    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.729054    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:08.732454    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:08.759881    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.759881    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:08.764328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:08.793695    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.793777    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:08.797908    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:08.826225    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.826225    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:08.829679    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:08.859645    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.859645    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:08.863083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:08.893657    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.893657    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:08.893657    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:08.893657    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:08.958163    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:08.958163    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:08.997418    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:08.997418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:09.087973    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:09.074815   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.076834   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.078823   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.080747   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.081590   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:09.087973    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:09.087973    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:09.115687    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:09.115687    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:11.697770    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:11.725676    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:11.758809    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.758809    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:11.762929    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:11.794198    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.794198    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:11.798023    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:11.828890    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.828890    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:11.833358    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:11.865217    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.865217    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:11.868915    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:11.897672    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.897672    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:11.901235    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:11.931725    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.931808    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:11.935264    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:11.966263    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.966263    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:11.970422    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:11.999856    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.999856    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:11.999856    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:11.999856    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:12.064137    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:12.064137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:12.102491    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:12.102491    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:12.183568    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:12.174095   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.175081   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.176122   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.177427   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.178548   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:12.183568    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:12.183568    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:12.212178    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:12.212178    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:11:08.241744    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:11:16.871278    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1217 02:11:16.871278    6768 node_ready.go:38] duration metric: took 6m0.0008728s for node "no-preload-184000" to be "Ready" ...
	I1217 02:11:16.874572    6768 out.go:203] 
	W1217 02:11:16.876457    6768 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 02:11:16.876457    6768 out.go:285] * 
	W1217 02:11:16.879042    6768 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:11:16.881673    6768 out.go:203] 
	I1217 02:11:14.772821    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:14.797656    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:14.826900    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.826900    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:14.829894    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:14.859202    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.859202    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:14.862783    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:14.891414    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.891414    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:14.895052    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:14.925404    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.925404    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:14.928966    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:14.959295    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.959330    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:14.962893    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:14.991696    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.991730    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:14.994776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:15.025468    6296 logs.go:282] 0 containers: []
	W1217 02:11:15.025468    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:15.031674    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:15.060661    6296 logs.go:282] 0 containers: []
	W1217 02:11:15.060661    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:15.060733    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:15.060733    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:15.120513    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:15.120513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:15.159608    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:15.159608    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:15.244418    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:15.235611   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.236439   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.238662   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.239643   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.240776   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:15.244418    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:15.244418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:15.271288    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:15.271288    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:17.830556    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:17.850600    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:17.886696    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.886696    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:17.890674    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:17.921702    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.921702    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:17.924697    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:17.952692    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.952692    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:17.956701    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:17.984691    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.984691    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:17.988655    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:18.024626    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.024663    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:18.028558    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:18.060310    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.060310    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:18.064024    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:18.100124    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.100124    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:18.104105    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:18.141223    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.141223    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:18.141223    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:18.141223    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:18.179686    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:18.179686    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:18.311240    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:18.298507   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.299764   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.301130   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.305360   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.306018   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:18.311240    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:18.311240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:18.342566    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:18.342615    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:18.393872    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:18.393872    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:20.977693    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:21.006733    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:21.035136    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.035201    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:21.039202    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:21.069636    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.069636    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:21.075448    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:21.105437    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.105437    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:21.108735    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:21.136602    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.136602    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:21.140124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:21.168674    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.168674    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:21.172368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:21.204723    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.204723    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:21.208123    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:21.237130    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.237130    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:21.240654    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:21.268170    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.268170    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:21.268170    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:21.268170    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:21.333642    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:21.333642    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:21.372230    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:21.372230    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:21.467012    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:21.456191   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457465   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457898   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.460543   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.461536   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:21.467012    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:21.467012    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:21.495867    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:21.495867    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:24.053568    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:24.079587    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:24.110362    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.110399    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:24.113326    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:24.141818    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.141818    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:24.145313    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:24.172031    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.172031    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:24.176197    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:24.205114    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.205133    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:24.208437    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:24.238244    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.238244    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:24.242692    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:24.271687    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.271687    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:24.276384    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:24.307922    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.307922    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:24.311538    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:24.350108    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.350108    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:24.350108    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:24.350108    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:24.402159    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:24.402224    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:24.463824    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:24.463824    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:24.503645    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:24.503645    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:24.591969    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:24.584283   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.585294   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.586182   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.588436   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.589378   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:24.591969    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:24.591969    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:27.123965    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:27.157839    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:27.199991    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.199991    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:27.204206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:27.231981    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.231981    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:27.235568    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:27.265668    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.265668    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:27.269162    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:27.299488    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.299488    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:27.303277    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:27.335769    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.335769    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:27.339516    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:27.369112    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.369112    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:27.372881    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:27.402031    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.402031    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:27.405780    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:27.436610    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.436610    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:27.436610    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:27.436610    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:27.523394    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:27.514396   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.515456   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.516979   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.518950   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.519928   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:27.523917    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:27.523957    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:27.552476    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:27.552476    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:27.607026    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:27.607078    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:27.670834    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:27.670834    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:30.216027    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:30.241711    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:30.272275    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.272275    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:30.276071    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:30.304635    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.304635    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:30.307639    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:30.340374    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.340374    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:30.343758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:30.374162    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.374162    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:30.378010    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:30.407836    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.407836    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:30.411411    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:30.440002    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.440002    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:30.443429    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:30.472647    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.472647    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:30.476538    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:30.510744    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.510744    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:30.510744    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:30.510744    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:30.575069    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:30.575156    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:30.639732    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:30.640731    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:30.685195    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:30.685195    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:30.775246    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:30.762447   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.763441   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.764998   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.765913   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.768466   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:30.775295    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:30.775295    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:33.308109    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:33.334329    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:33.365061    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.365061    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:33.370854    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:33.399488    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.399488    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:33.406335    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:33.436434    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.436434    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:33.439783    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:33.468947    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.468947    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:33.474014    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:33.502568    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.502568    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:33.506146    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:33.535706    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.535706    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:33.540016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:33.573811    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.573811    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:33.577712    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:33.606321    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.606321    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:33.606321    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:33.606321    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:33.671884    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:33.671884    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:33.712095    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:33.712095    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:33.800767    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:33.788569   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.789526   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.793280   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.794779   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.795796   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:33.800848    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:33.800884    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:33.829402    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:33.829474    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:36.410236    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:36.438912    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:36.468229    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.468229    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:36.472231    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:36.501220    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.501220    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:36.506462    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:36.539556    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.539556    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:36.543603    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:36.584367    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.584367    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:36.588513    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:36.620670    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.620670    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:36.626030    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:36.654239    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.654239    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:36.658962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:36.689023    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.689023    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:36.693754    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:36.721351    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.721351    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:36.721351    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:36.721351    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:36.787832    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:36.787832    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:36.828019    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:36.828019    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:36.916923    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:36.906317   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.907259   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.909560   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.910589   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.911494   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:36.906317   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.907259   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.909560   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.910589   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.911494   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:36.916923    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:36.916923    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:36.946231    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:36.946265    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:39.498459    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:39.522909    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:39.553462    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.553462    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:39.557524    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:39.585462    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.585462    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:39.591342    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:39.619332    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.619399    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:39.623096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:39.651071    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.651071    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:39.654766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:39.683502    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.683502    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:39.687390    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:39.715332    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.715332    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:39.718932    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:39.749019    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.749019    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:39.752739    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:39.783378    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.783378    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:39.783378    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:39.783378    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:39.835019    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:39.835019    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:39.899542    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:39.899542    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:39.938717    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:39.938717    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:40.026359    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:40.016461   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.017619   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.018723   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.019917   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.021008   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:40.016461   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.017619   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.018723   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.019917   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.021008   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:40.026403    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:40.026446    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:42.561805    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:42.585507    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:42.613091    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.613091    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:42.616991    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:42.647608    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.647608    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:42.651380    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:42.680540    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.680540    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:42.683625    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:42.717014    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.717014    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:42.721369    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:42.750017    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.750017    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:42.753961    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:42.785164    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.785164    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:42.788883    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:42.817424    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.817424    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:42.821266    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:42.853247    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.853247    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:42.853247    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:42.853247    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:42.910034    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:42.910052    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:42.970436    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:42.970436    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:43.009833    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:43.010830    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:43.102803    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:43.091179   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.092013   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.095588   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.097098   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.098447   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:43.091179   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.092013   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.095588   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.097098   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.098447   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:43.102803    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:43.102803    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
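	The timestamps show the same gather-and-check cycle repeating roughly every three seconds: run pgrep for the apiserver process and, when that fails, re-collect kubelet, dmesg, describe-nodes, Docker, and container-status logs. A standalone Go sketch of that poll-until-deadline pattern (names and timeouts here are illustrative, not minikube's implementation):

	package main

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), time.Minute)
		defer cancel()
		for {
			// pgrep exits 0 only when a matching process exists; this mirrors
			// the check the log runs against kube-apiserver.*minikube.*
			if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				fmt.Println("kube-apiserver process found")
				return
			}
			select {
			case <-ctx.Done():
				fmt.Println("gave up waiting:", ctx.Err())
				return
			case <-time.After(3 * time.Second):
			}
		}
	}
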
	I1217 02:11:45.636418    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:45.661677    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:45.695141    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.695141    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:45.699189    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:45.729376    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.729376    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:45.733753    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:45.764365    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.764365    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:45.767917    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:45.799287    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.799287    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:45.802968    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:45.835270    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.835270    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:45.838766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:45.868660    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.868660    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:45.875727    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:45.903566    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.903566    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:45.907562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:45.937452    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.937452    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:45.937452    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:45.937452    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:45.965091    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:45.965091    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:46.013173    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:46.013173    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:46.077113    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:46.077113    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:46.118527    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:46.118527    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:46.207662    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:46.198319   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.199665   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.200697   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.201868   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.202946   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:46.198319   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.199665   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.200697   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.201868   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.202946   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:48.714055    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:48.741412    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:48.772767    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.772767    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:48.776092    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:48.804946    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.805020    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:48.808538    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:48.837488    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.837488    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:48.840453    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:48.871139    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.871139    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:48.875518    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:48.904264    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.904264    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:48.911351    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:48.939118    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.939118    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:48.943340    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:48.970934    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.970934    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:48.974990    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:49.005140    6296 logs.go:282] 0 containers: []
	W1217 02:11:49.005174    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:49.005205    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:49.005234    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:49.075925    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:49.075925    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:49.116144    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:49.116144    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:49.196968    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:49.188036   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.189151   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.190274   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.191246   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.192420   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:49.188036   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.189151   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.190274   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.191246   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.192420   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:49.197074    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:49.197074    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:49.222883    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:49.223404    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:51.783312    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:51.809151    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:51.839751    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.839751    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:51.844016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:51.895178    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.895178    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:51.899341    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:51.930311    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.930311    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:51.933797    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:51.961857    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.961857    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:51.966036    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:51.993647    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.993647    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:51.997672    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:52.026485    6296 logs.go:282] 0 containers: []
	W1217 02:11:52.026485    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:52.032726    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:52.062039    6296 logs.go:282] 0 containers: []
	W1217 02:11:52.062039    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:52.066379    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:52.096772    6296 logs.go:282] 0 containers: []
	W1217 02:11:52.096772    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:52.096772    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:52.096772    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:52.163369    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:52.163369    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:52.203719    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:52.203719    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:52.295324    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:52.285688   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.286944   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.288407   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.289493   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.290536   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:52.285688   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.286944   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.288407   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.289493   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.290536   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:52.295324    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:52.295324    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:52.323234    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:52.323234    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
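	Each cycle also enumerates the expected control-plane containers with the same docker invocation, one component at a time, and every query returns an empty list. A small Go wrapper around that exact command line (a sketch; listContainers is an illustrative helper, not a minikube function):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// listContainers runs the same query as the log lines above and returns
	// the IDs of containers whose name matches the given k8s_ prefix.
	func listContainers(name string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name="+name, "--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		ids, err := listContainers("k8s_kube-apiserver")
		if err != nil {
			fmt.Println("docker ps failed:", err)
			return
		}
		// On this node the log consistently reports "0 containers: []".
		fmt.Printf("%d containers: %v\n", len(ids), ids)
	}
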
	I1217 02:11:54.878824    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:54.907441    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:54.944864    6296 logs.go:282] 0 containers: []
	W1217 02:11:54.944864    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:54.948030    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:54.980769    6296 logs.go:282] 0 containers: []
	W1217 02:11:54.980769    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:54.987506    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:55.019726    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.019726    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:55.024226    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:55.052618    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.052618    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:55.056658    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:55.085528    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.085607    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:55.089212    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:55.120453    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.120525    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:55.124591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:55.154725    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.154749    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:55.157707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:55.187692    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.187692    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:55.187692    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:55.187692    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:55.252848    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:55.252848    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:55.318197    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:55.318197    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:55.358145    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:55.358145    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:55.439213    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:55.430988   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.431927   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.433074   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.434586   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.435691   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:55.430988   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.431927   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.433074   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.434586   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.435691   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:55.439213    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:55.439744    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:57.972346    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:57.997412    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:58.029794    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.029794    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:58.033582    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:58.064729    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.064729    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:58.068722    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:58.103854    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.103854    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:58.107069    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:58.140767    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.140767    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:58.145080    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:58.172792    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.172792    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:58.177038    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:58.205809    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.205809    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:58.209371    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:58.236353    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.236353    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:58.240621    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:58.269469    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.269469    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:58.269469    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:58.269469    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:58.324960    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:58.324960    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:58.384708    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:58.384708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:58.423476    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:58.423476    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:58.512328    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:58.500192   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.501577   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.503665   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.506831   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.509044   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:58.500192   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.501577   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.503665   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.506831   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.509044   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:58.512387    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:58.512387    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
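	The "container status" step uses a shell fallback: prefer crictl if it is on PATH, otherwise fall back to docker ps -a. Driven from Go, that one-liner looks like the following (a sketch of the local equivalent; minikube's actual runner executes it over SSH on the node):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same fallback as the log: `which crictl || echo crictl` resolves to
		// crictl when present; if that command fails, docker ps -a runs instead.
		cmd := exec.Command("/bin/bash", "-c",
			"sudo `which crictl || echo crictl` ps -a || sudo docker ps -a")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("container status failed:", err)
		}
	}
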
	I1217 02:12:01.044354    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:01.073699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:01.104765    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.104836    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:01.107915    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:01.141131    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.141131    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:01.145209    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:01.174536    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.174536    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:01.178187    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:01.209172    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.209172    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:01.212803    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:01.241435    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.241486    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:01.245545    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:01.277115    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.277115    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:01.281366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:01.312158    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.312158    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:01.316725    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:01.343220    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.343220    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:01.343220    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:01.343220    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:01.382233    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:01.382233    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:01.487570    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:01.476084   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.477142   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.479990   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.481020   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.482426   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:01.476084   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.477142   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.479990   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.481020   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.482426   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:01.488578    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:01.488578    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:01.514572    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:01.514572    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:01.567754    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:01.567754    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:04.140604    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:04.165376    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:04.197379    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.197379    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:04.202896    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:04.231436    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.231506    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:04.235354    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:04.267960    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.267960    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:04.271789    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:04.301108    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.301108    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:04.305219    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:04.334515    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.334515    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:04.338693    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:04.366071    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.366071    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:04.369958    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:04.398457    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.398457    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:04.405087    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:04.432495    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.432495    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:04.432495    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:04.432495    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:04.492454    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:04.492454    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:04.530878    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:04.530878    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:04.615739    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:04.603893   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.604965   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.606519   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.608498   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.609457   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:04.603893   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.604965   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.606519   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.608498   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.609457   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:04.615739    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:04.615739    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:04.643270    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:04.643304    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
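The "container status" command just above is a portable fallback: the backtick substitution resolves to crictl's path when `which crictl` succeeds, and to the bare word crictl otherwise; if the resulting command fails (crictl absent, or its runtime endpoint down), the `|| sudo docker ps -a` branch runs instead. The per-component probes in each cycle use the same two docker flags throughout: a name filter matching kubeadm's k8s_<component>_... container-naming convention, and a Go-template format that prints only the ID column, so empty output means the component's container simply does not exist. For example:

    # fallback: prefer crictl, fall back to plain docker
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

    # per-component probe: IDs only; empty output => no such container
    docker ps -a --filter=name=k8s_etcd --format '{{.ID}}'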
	I1217 02:12:07.195429    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:07.221998    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:07.254842    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.254842    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:07.258578    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:07.291820    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.291820    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:07.297979    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:07.329603    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.329603    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:07.334181    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:07.363276    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.363324    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:07.367248    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:07.394630    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.394695    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:07.398679    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:07.425998    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.425998    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:07.429814    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:07.458824    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.458878    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:07.462682    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:07.490543    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.490614    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:07.490614    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:07.490614    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:07.575806    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:07.562525   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.563684   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.568204   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.569084   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.572372   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:07.562525   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.563684   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.568204   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.569084   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.572372   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:07.575806    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:07.576816    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:07.607910    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:07.607910    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:07.659155    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:07.659155    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:07.722240    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:07.722240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:10.270711    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:10.295753    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:10.324920    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.324920    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:10.328903    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:10.358180    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.358218    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:10.362249    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:10.390135    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.390135    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:10.393738    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:10.423058    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.423090    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:10.426534    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:10.456745    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.456745    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:10.463439    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:10.493765    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.493765    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:10.497858    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:10.526425    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.526425    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:10.532217    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:10.563338    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.563338    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:10.563338    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:10.563338    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:10.627669    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:10.627669    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:10.666455    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:10.666455    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:10.755613    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:10.742575   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.744309   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.748746   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.750149   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.751294   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:10.742575   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.744309   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.748746   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.750149   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.751294   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:10.755613    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:10.755613    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:10.786516    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:10.787045    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:13.342631    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:13.368870    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:13.402304    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.402347    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:13.408012    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:13.436633    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.436710    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:13.439877    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:13.468754    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.469007    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:13.473752    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:13.505247    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.505324    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:13.509766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:13.538745    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.538745    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:13.542743    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:13.571986    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.571986    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:13.575522    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:13.604002    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.604002    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:13.608063    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:13.636028    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.636028    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:13.636028    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:13.636028    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:13.701418    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:13.701418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:13.740729    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:13.740729    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:13.830687    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:13.819650   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.820972   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.822197   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.823236   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.826085   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:13.819650   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.820972   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.822197   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.823236   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.826085   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:13.830746    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:13.830768    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:13.856732    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:13.856732    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:16.415071    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:16.441827    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:16.474920    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.474920    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:16.478560    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:16.509149    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.509149    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:16.512927    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:16.544114    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.544114    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:16.547867    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:16.578111    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.578111    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:16.581776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:16.610586    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.610586    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:16.614807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:16.644103    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.644103    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:16.647954    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:16.692289    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.692289    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:16.696153    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:16.727229    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.727229    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:16.727229    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:16.727229    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:16.823236    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:16.813914   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.815339   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.816582   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.817632   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.818568   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:16.813914   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.815339   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.816582   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.817632   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.818568   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:16.823236    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:16.823236    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:16.849827    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:16.849827    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:16.905388    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:16.905414    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:16.965153    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:16.965153    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:19.511192    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:19.537347    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:19.568920    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.568920    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:19.573318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:19.604587    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.604587    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:19.608244    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:19.637707    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.637732    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:19.641314    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:19.669047    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.669047    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:19.672932    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:19.703243    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.703243    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:19.706862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:19.738948    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.738948    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:19.742483    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:19.773620    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.773620    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:19.777766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:19.807218    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.807218    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:19.807218    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:19.807218    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:19.872750    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:19.872750    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:19.912835    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:19.912835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:19.997398    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:19.986540   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.987576   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.989197   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.992124   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.993453   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:19.986540   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.987576   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.989197   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.992124   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.993453   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:19.997398    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:19.997398    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:20.025629    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:20.025629    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:22.593289    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:22.619754    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:22.652929    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.652929    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:22.657635    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:22.689768    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.689846    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:22.693504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:22.720087    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.720087    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:22.723840    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:22.752902    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.752959    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:22.757109    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:22.787369    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.787369    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:22.791584    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:22.822117    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.822117    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:22.825675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:22.856022    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.856022    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:22.859609    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:22.886982    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.886982    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:22.886982    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:22.886982    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:22.972988    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:22.964488   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.965494   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.966951   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.967984   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.968891   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:22.964488   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.965494   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.966951   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.967984   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.968891   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:22.972988    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:22.972988    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:23.002037    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:23.002037    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:23.061548    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:23.061548    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:23.124352    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:23.124352    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:25.670974    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:25.706279    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:25.741150    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.741150    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:25.745079    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:25.773721    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.773782    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:25.779777    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:25.808516    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.808516    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:25.813011    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:25.844755    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.844755    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:25.848591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:25.877332    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.877332    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:25.881053    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:25.907973    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.907973    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:25.914424    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:25.941138    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.941138    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:25.945025    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:25.974760    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.974760    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:25.974760    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:25.974760    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:26.012354    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:26.012354    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:26.113177    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:26.103007   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.104679   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.105508   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.108836   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.110003   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:26.103007   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.104679   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.105508   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.108836   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.110003   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:26.113177    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:26.113177    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:26.144162    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:26.144245    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:26.194605    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:26.195138    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:28.763811    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:28.789762    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:28.820544    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.820544    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:28.824807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:28.855728    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.855728    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:28.860354    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:28.894655    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.894655    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:28.898069    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:28.928310    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.928394    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:28.932124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:28.967209    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.967209    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:28.973126    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:29.002975    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.003024    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:29.006839    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:29.044805    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.044881    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:29.049158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:29.078108    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.078142    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:29.078174    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:29.078202    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:29.142751    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:29.142751    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:29.182082    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:29.182082    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:29.271566    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:29.260263   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.261578   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.262370   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.263821   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.265155   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:29.260263   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.261578   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.262370   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.263821   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.265155   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:29.271596    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:29.271643    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:29.299332    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:29.299332    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:31.856743    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:31.882741    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:31.912323    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.912372    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:31.917046    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:31.948587    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.948631    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:31.952095    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:31.981682    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.981682    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:31.985888    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:32.022173    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.022173    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:32.026061    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:32.070026    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.070026    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:32.074016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:32.105255    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.105255    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:32.109062    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:32.140873    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.140947    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:32.143941    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:32.172848    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.172876    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:32.172876    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:32.172876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:32.237207    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:32.237207    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:32.275838    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:32.275838    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:32.360656    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:32.349190   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.350542   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.352960   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.354559   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.355745   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:32.360656    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:32.360656    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:32.391099    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:32.391099    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:34.970955    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:35.002200    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:35.036658    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.036658    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:35.041208    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:35.068998    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.068998    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:35.075758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:35.105253    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.105253    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:35.109356    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:35.137411    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.137411    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:35.141289    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:35.168542    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.168542    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:35.174717    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:35.204677    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.204677    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:35.209675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:35.240901    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.240901    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:35.244034    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:35.276453    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.276453    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:35.276453    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:35.276453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:35.341158    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:35.341158    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:35.381822    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:35.381822    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:35.472890    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:35.461861   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.463097   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.464080   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.465245   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.466603   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:35.472890    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:35.472890    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:35.501374    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:35.501374    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:38.054644    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:38.080787    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:38.112397    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.112420    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:38.116070    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:38.144341    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.144396    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:38.148080    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:38.177159    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.177159    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:38.181253    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:38.210000    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.210000    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:38.215709    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:38.243526    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.243526    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:38.247620    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:38.278443    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.278443    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:38.282504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:38.314414    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.314414    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:38.317968    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:38.345306    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.345306    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:38.345306    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:38.345412    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:38.425240    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:38.414795   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.415865   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.416969   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.418280   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.420090   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:38.425240    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:38.425240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:38.455129    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:38.455129    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:38.514775    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:38.514775    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:38.574255    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:38.574255    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:41.116537    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:41.139650    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:41.169726    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.169814    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:41.173285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:41.204812    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.204812    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:41.208892    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:41.235980    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.235980    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:41.240200    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:41.271415    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.271415    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:41.275005    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:41.303967    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.303967    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:41.309707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:41.340401    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.340401    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:41.343688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:41.374008    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.374008    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:41.377325    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:41.409502    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.409563    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:41.409563    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:41.409610    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:41.472168    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:41.472168    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:41.513098    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:41.513098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:41.601716    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:41.590607   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.591236   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.594281   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.595448   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.596679   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:41.601716    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:41.601716    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:41.629092    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:41.629148    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:44.185012    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:44.210566    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:44.242274    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.242274    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:44.248762    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:44.280241    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.280307    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:44.283818    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:44.312929    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.312997    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:44.316643    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:44.343840    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.343840    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:44.347619    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:44.378547    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.378547    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:44.382595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:44.410908    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.410908    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:44.414686    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:44.448329    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.448329    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:44.453888    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:44.484842    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.484842    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:44.484842    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:44.484842    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:44.550740    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:44.550740    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:44.589666    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:44.589666    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:44.677625    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:44.666291   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.667584   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.668804   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.671406   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.673722   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:44.677625    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:44.677625    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:44.706051    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:44.706051    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:47.257477    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:47.286845    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:47.315563    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.315563    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:47.319220    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:47.351319    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.351319    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:47.354946    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:47.382237    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.382237    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:47.386106    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:47.415608    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.415608    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:47.419575    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:47.449212    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.449241    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:47.452978    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:47.482356    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.482356    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:47.486511    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:47.518156    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.518205    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:47.522254    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:47.550631    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.550631    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:47.550631    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:47.550727    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:47.615950    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:47.615950    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:47.655928    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:47.655928    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:47.744126    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:47.732398   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.733599   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.736473   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.737237   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.739895   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:47.744164    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:47.744210    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:47.773502    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:47.773502    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:50.331328    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:50.368555    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:50.407443    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.407443    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:50.411798    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:50.440520    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.440544    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:50.444430    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:50.478050    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.478050    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:50.481848    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:50.513603    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.513658    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:50.517565    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:50.551935    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.552946    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:50.556641    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:50.591171    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.591171    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:50.594981    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:50.624821    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.624821    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:50.628756    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:50.661209    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.661209    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:50.661209    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:50.661209    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:50.693141    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:50.693141    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:50.746322    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:50.746322    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:50.805974    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:50.805974    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:50.844572    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:50.844572    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:50.935133    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:50.925528   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.926281   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.929008   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.930044   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.931058   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:53.441690    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:53.466017    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:53.494846    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.494846    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:53.499634    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:53.530839    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.530839    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:53.534661    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:53.567189    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.567189    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:53.571412    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:53.598763    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.598763    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:53.602673    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:53.629791    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.629791    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:53.632953    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:53.662323    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.662323    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:53.665394    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:53.695745    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.695745    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:53.701403    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:53.735348    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.735348    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:53.735348    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:53.735348    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:53.816532    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:53.807828   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.809036   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.810223   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.811373   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.812449   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:53.816532    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:53.816532    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:53.843453    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:53.843453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:53.893853    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:53.893853    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:53.954759    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:53.954759    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:56.499506    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:56.525316    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:56.561689    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.561738    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:56.565616    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:56.594009    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.594009    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:56.599822    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:56.624101    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.624101    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:56.628604    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:56.657977    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.658063    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:56.663240    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:56.694316    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.694316    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:56.698763    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:56.728527    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.728527    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:56.734446    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:56.765315    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.765315    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:56.769182    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:56.796198    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.796198    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:56.796198    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:56.796198    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:56.864777    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:56.864777    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:56.904264    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:56.904264    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:57.000434    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:56.990265   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.991556   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.992920   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.993844   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.996033   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:57.000434    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:57.000434    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:57.034757    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:57.034842    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:59.601768    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:59.627731    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:59.657009    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.657009    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:59.660962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:59.690428    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.690428    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:59.694181    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:59.723517    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.723592    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:59.727191    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:59.756251    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.756251    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:59.759627    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:59.791516    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.791516    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:59.795602    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:59.828192    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.828192    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:59.832003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:59.860258    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.860258    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:59.863635    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:59.893207    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.893207    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:59.893207    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:59.893207    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:59.958927    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:59.958927    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:00.004703    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:00.004703    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:00.096612    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:00.084050   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.085145   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.086221   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.088049   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.090502   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:00.096612    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:00.096612    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:00.124914    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:00.124975    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:02.682962    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:02.708543    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:02.737663    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.737663    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:02.741817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:02.772482    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.772482    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:02.778562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:02.806978    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.806978    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:02.813021    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:02.845688    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.845688    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:02.851578    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:02.880144    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.880200    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:02.883811    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:02.918466    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.918544    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:02.922186    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:02.951702    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.951702    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:02.955491    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:02.984638    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.984638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:02.984638    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:02.984638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:03.047941    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:03.047941    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:03.086964    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:03.086964    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:03.173007    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:03.161327   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.162497   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.163381   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.165030   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.166441   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:03.173086    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:03.173086    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:03.202017    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:03.202544    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:05.761010    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:05.786319    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:05.819785    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.819785    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:05.825532    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:05.853318    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.853318    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:05.858274    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:05.887613    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.887613    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:05.891162    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:05.919471    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.919471    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:05.922933    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:05.955441    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.955441    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:05.959241    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:05.984925    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.984925    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:05.989009    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:06.021101    6296 logs.go:282] 0 containers: []
	W1217 02:13:06.021101    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:06.024383    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:06.055098    6296 logs.go:282] 0 containers: []
	W1217 02:13:06.055098    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:06.055098    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:06.055098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:06.107743    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:06.107743    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:06.170319    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:06.170319    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:06.210360    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:06.210360    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:06.299194    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:06.288404   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.289415   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.292346   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.293307   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.294574   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:06.299194    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:06.299194    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:08.832901    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:08.860263    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:08.890111    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.890111    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:08.893617    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:08.921989    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.921989    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:08.925561    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:08.952883    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.952883    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:08.959516    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:08.991347    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.991347    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:08.995066    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:09.028011    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.028011    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:09.032096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:09.060803    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.060803    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:09.064596    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:09.093542    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.093572    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:09.096987    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:09.123594    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.123615    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:09.123615    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:09.123615    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:09.176222    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:09.176222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:09.238935    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:09.238935    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:09.278804    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:09.278804    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:09.367283    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:09.355984   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.356989   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.358233   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.359697   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.360921   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:09.367283    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:09.367283    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:11.901781    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:11.930493    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:11.963534    6296 logs.go:282] 0 containers: []
	W1217 02:13:11.963534    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:11.967747    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:11.997700    6296 logs.go:282] 0 containers: []
	W1217 02:13:11.997700    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:12.001601    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:12.031862    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.031862    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:12.035544    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:12.066506    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.066506    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:12.071472    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:12.103184    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.103184    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:12.107033    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:12.135713    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.135713    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:12.139268    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:12.170350    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.170350    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:12.174053    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:12.202964    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.202964    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:12.202964    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:12.202964    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:12.252669    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:12.253197    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:12.318088    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:12.318088    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:12.356768    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:12.356768    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:12.443857    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:12.431867   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.432694   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.435515   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.436810   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.439065   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:12.443857    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:12.443857    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:14.980350    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:15.007303    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:15.040020    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.040100    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:15.043303    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:15.073502    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.073502    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:15.077944    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:15.106871    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.106871    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:15.110453    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:15.138071    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.138095    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:15.141547    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:15.171602    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.171659    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:15.175341    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:15.207140    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.207181    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:15.210547    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:15.243222    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.243222    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:15.247103    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:15.280156    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.280232    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:15.280232    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:15.280232    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:15.342862    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:15.342862    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:15.384022    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:15.384022    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:15.469724    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:15.457538   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.458755   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.461376   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.463262   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.464126   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:15.469766    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:15.469807    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:15.497606    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:15.497667    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:18.064895    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:18.090410    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:18.123378    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.123429    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:18.127331    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:18.157210    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.157210    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:18.160924    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:18.191242    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.191242    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:18.195064    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:18.222561    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.222561    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:18.226125    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:18.255891    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.255891    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:18.259860    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:18.288868    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.288868    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:18.292834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:18.322668    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.322668    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:18.325666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:18.353052    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.353052    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:18.353052    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:18.353052    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:18.418504    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:18.418504    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:18.457348    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:18.457348    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:18.568946    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:18.539845   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.540709   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.559501   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.563750   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.565031   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:18.569003    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:18.569003    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:18.602236    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:18.602236    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:21.158752    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:21.184475    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:21.214582    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.214582    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:21.218375    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:21.245604    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.245604    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:21.249850    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:21.281360    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.281360    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:21.286501    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:21.318549    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.318601    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:21.322609    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:21.353429    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.353460    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:21.357031    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:21.391028    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.391028    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:21.394206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:21.423584    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.423584    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:21.427599    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:21.458683    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.458683    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:21.458683    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:21.458683    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:21.526430    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:21.526430    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:21.565490    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:21.565490    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:21.656323    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:21.643307   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.644610   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.648760   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.649980   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.650911   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:21.656323    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:21.656323    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:21.689700    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:21.689700    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:24.246630    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:24.280925    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:24.322972    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.322972    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:24.326768    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:24.355732    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.355732    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:24.359957    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:24.391937    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.392009    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:24.395559    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:24.427388    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.427388    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:24.431126    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:24.459891    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.459966    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:24.463468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:24.491009    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.491009    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:24.494465    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:24.524468    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.524468    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:24.528017    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:24.568815    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.568815    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:24.568815    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:24.568815    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:24.632772    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:24.632772    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:24.671731    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:24.671731    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:24.755604    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:24.747209   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.748169   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.750016   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.751205   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.752643   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:24.755604    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:24.755604    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:24.784599    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:24.784660    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:27.338272    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:27.366367    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:27.395715    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.395715    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:27.399158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:27.427362    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.427362    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:27.430752    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:27.461990    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.461990    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:27.465748    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:27.492985    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.492985    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:27.497176    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:27.528724    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.528724    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:27.532970    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:27.571655    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.571655    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:27.575466    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:27.604007    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.604068    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:27.608062    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:27.639624    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.639689    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:27.639735    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:27.639735    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:27.705896    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:27.705896    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:27.745294    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:27.745294    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:27.827462    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:27.817987   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.819077   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.820142   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.821119   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.823572   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:27.827462    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:27.827462    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:27.854463    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:27.854559    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:30.412283    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:30.438474    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:30.469848    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.469848    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:30.473330    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:30.501713    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.501713    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:30.505748    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:30.535870    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.535870    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:30.540177    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:30.572310    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.572310    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:30.576644    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:30.607087    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.607087    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:30.610334    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:30.640168    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.640168    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:30.643628    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:30.671132    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.671132    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:30.677927    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:30.708536    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.708536    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:30.708536    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:30.708536    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:30.773222    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:30.773222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:30.812763    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:30.812763    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:30.932347    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:30.917907   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.918960   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.921632   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.923322   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.925337   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:30.917907   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.918960   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.921632   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.923322   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.925337   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:30.932397    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:30.932444    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:30.961663    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:30.961663    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:33.524404    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:33.548624    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:33.580753    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.580845    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:33.583912    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:33.613001    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.613048    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:33.616808    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:33.645262    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.645262    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:33.649044    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:33.677477    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.677562    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:33.681205    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:33.710607    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.710669    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:33.714600    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:33.742889    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.742889    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:33.746623    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:33.777022    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.777022    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:33.780455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:33.809525    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.809525    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:33.809525    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:33.809525    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:33.860852    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:33.860936    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:33.924768    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:33.924768    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:33.962632    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:33.962632    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:34.054124    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:34.042221   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.043292   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.044548   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.046184   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.047237   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:34.042221   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.043292   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.044548   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.046184   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.047237   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:34.054124    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:34.054124    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:36.589465    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:36.617658    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:36.652432    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.652432    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:36.656189    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:36.694709    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.694709    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:36.700040    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:36.729913    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.729913    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:36.733870    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:36.762591    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.762591    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:36.766493    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:36.796414    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.796414    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:36.800540    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:36.828148    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.828148    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:36.833323    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:36.862390    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.862390    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:36.866173    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:36.895727    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.895814    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:36.895814    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:36.895814    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:36.926240    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:36.926240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:36.975760    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:36.975760    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:37.036350    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:37.036350    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:37.072745    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:37.072745    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:37.161612    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:37.149826   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.150994   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.152971   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.154071   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.155248   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:37.149826   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.150994   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.152971   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.154071   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.155248   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:39.667288    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:39.691212    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:39.724148    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.724148    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:39.727935    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:39.761821    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.761821    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:39.765852    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:39.793659    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.793696    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:39.797422    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:39.825439    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.825473    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:39.828751    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:39.859011    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.859011    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:39.862518    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:39.891552    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.891613    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:39.894978    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:39.926857    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.926857    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:39.930363    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:39.975835    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.975835    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:39.975835    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:39.975835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:40.070107    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:40.058472   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.059584   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.060546   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.062682   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.064347   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:40.058472   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.059584   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.060546   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.062682   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.064347   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:40.070107    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:40.070107    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:40.098563    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:40.098605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:40.147476    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:40.147476    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:40.212702    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:40.212702    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:42.757339    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:42.786178    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:42.817429    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.817429    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:42.821164    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:42.850363    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.850415    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:42.854031    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:42.881774    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.881774    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:42.885802    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:42.915556    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.915556    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:42.919184    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:42.948329    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.948329    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:42.952430    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:42.982355    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.982355    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:42.986768    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:43.017700    6296 logs.go:282] 0 containers: []
	W1217 02:13:43.017700    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:43.021284    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:43.052749    6296 logs.go:282] 0 containers: []
	W1217 02:13:43.052779    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:43.052779    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:43.052813    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:43.091605    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:43.091605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:43.175861    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:43.162839   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.163916   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.164763   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.167177   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.170134   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:43.162839   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.163916   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.164763   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.167177   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.170134   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:43.175861    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:43.175861    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:43.204569    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:43.204569    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:43.257132    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:43.257132    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:45.825092    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:45.853653    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:45.886780    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.886780    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:45.890416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:45.921840    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.923184    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:45.928382    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:45.960187    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.960252    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:45.963959    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:45.993658    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.993712    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:45.997113    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:46.024308    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.024308    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:46.027994    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:46.060725    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.060725    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:46.064446    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:46.092825    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.092825    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:46.098256    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:46.129614    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.129688    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:46.129688    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:46.129688    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:46.216242    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:46.204904   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.206123   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.207788   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.210288   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.211623   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:46.204904   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.206123   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.207788   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.210288   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.211623   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:46.216263    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:46.216263    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:46.248767    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:46.248767    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:46.298044    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:46.298044    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:46.363186    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:46.363186    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:48.911992    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:48.946588    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:48.983880    6296 logs.go:282] 0 containers: []
	W1217 02:13:48.983880    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:48.987999    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:49.017254    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.017254    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:49.021239    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:49.053619    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.053619    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:49.057711    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:49.086289    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.086289    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:49.090230    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:49.123069    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.123069    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:49.130107    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:49.158724    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.158724    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:49.162733    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:49.193515    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.193573    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:49.197116    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:49.230153    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.230201    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:49.230245    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:49.230245    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:49.259747    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:49.259747    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:49.312360    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:49.312456    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:49.375035    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:49.375035    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:49.413908    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:49.413908    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:49.508187    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:49.496893   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.499745   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.502343   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.503338   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.504593   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:49.496893   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.499745   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.502343   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.503338   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.504593   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:52.012834    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:52.037104    6296 out.go:203] 
	W1217 02:13:52.039462    6296 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 02:13:52.039520    6296 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 02:13:52.039588    6296 out.go:285] * Related issues:
	W1217 02:13:52.039588    6296 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1217 02:13:52.039635    6296 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1217 02:13:52.041923    6296 out.go:203] 
	
	
	==> Docker <==
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700732008Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700826718Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700839319Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700844420Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700849520Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700872823Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700996336Z" level=info msg="Initializing buildkit"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.801833124Z" level=info msg="Completed buildkit initialization"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.807448530Z" level=info msg="Daemon has completed initialization"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.807644551Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.807743662Z" level=info msg="API listen on [::]:2376"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.807662953Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 02:07:46 newest-cni-383500 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 02:07:47 newest-cni-383500 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Loaded network plugin cni"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 02:07:47 newest-cni-383500 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:56.145291   19356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:56.146460   19356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:56.147461   19356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:56.149604   19356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:56.151316   19356 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +5.752411] CPU: 12 PID: 469779 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f8b9b910b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f8b9b910af6.
	[  +0.000001] RSP: 002b:00007fffc85e9670 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.875329] CPU: 10 PID: 469916 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7fdfac8dab20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fdfac8daaf6.
	[  +0.000001] RSP: 002b:00007ffd587a0060 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 02:13:56 up  2:33,  0 user,  load average: 1.16, 0.96, 2.07
	Linux newest-cni-383500 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:13:52 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:13:53 newest-cni-383500 kubelet[19187]: E1217 02:13:53.069253   19187 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:13:53 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:13:53 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:13:53 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 487.
	Dec 17 02:13:53 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:13:53 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:13:53 newest-cni-383500 kubelet[19201]: E1217 02:13:53.877263   19201 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:13:53 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:13:53 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:13:54 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 488.
	Dec 17 02:13:54 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:13:54 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:13:54 newest-cni-383500 kubelet[19228]: E1217 02:13:54.806556   19228 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:13:54 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:13:54 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:13:55 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 489.
	Dec 17 02:13:55 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:13:55 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:13:55 newest-cni-383500 kubelet[19241]: E1217 02:13:55.562871   19241 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:13:55 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:13:55 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:13:56 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 490.
	Dec 17 02:13:56 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:13:56 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-383500 -n newest-cni-383500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-383500 -n newest-cni-383500: exit status 2 (582.4999ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-383500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/SecondStart (381.39s)
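Diagnostic note: the kubelet journal captured above shows the actual root cause of this SecondStart failure. kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host ("kubelet is configured to not run on a host using cgroup v1"), so systemd restarts it in a tight loop (restart counter 487-490), no kube-apiserver container is ever created, and minikube times out with K8S_APISERVER_MISSING. The generic suggestion minikube prints (apiserver flags / SELinux) does not apply here; the kernel section (5.15.153.1-microsoft-standard-WSL2) and the Docker daemon's cgroup v1 deprecation warning both indicate the WSL2 host is still on the legacy cgroup layout. Below is a minimal sketch of how the host condition could be confirmed and, on a WSL2 host, switched to cgroup v2; the stat check and the .wslconfig kernel parameters are standard Linux/WSL2 mechanisms and an assumption about this CI host, not something exercised by this test run:

	# Inside the node (or the WSL2 distro): print the filesystem type mounted at
	# /sys/fs/cgroup. "cgroup2fs" means a unified cgroup v2 host, which this
	# kubelet build requires; "tmpfs" indicates the legacy v1 layout seen here.
	stat -fc %T /sys/fs/cgroup/

	# On the Windows side, %UserProfile%\.wslconfig can force the WSL2 kernel to
	# boot with cgroup v2 only; apply with `wsl --shutdown`, then restart the distro:
	#   [wsl2]
	#   kernelCommandLine = cgroup_no_v1=all systemd.unified_cgroup_hierarchy=1

Until the host presents cgroup v2, any further start of this kubelet version on the same machine can be expected to fail the same validation, independent of the minikube flags under test.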

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (545.37s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1217 02:11:29.541373    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:11:52.419849    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:11:56.853279    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:12:01.758807    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:12:28.702661    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:12:33.947418    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:13:04.330212    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:13:07.206517    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:13:14.177075    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:13:46.401193    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:14:27.403260    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:15:06.468494    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:15:09.471927    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:15:13.043901    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-044000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:15:22.407174    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:15:33.771902    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:15:38.685305    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:16:00.971054    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-278200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:16:05.633750    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:16:36.112573    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-044000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:16:52.423427    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:17:24.042404    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-278200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:17:33.952907    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:18:04.335566    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:18:07.211646    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:18:14.182204    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:18:46.405299    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:19:01.857110    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:20:06.473627    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:20:13.047739    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\old-k8s-version-044000\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
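Every request in the polling loop above ended in EOF rather than an HTTP error, which suggests the connection to the published apiserver port at 127.0.0.1:63565 was accepted but closed before any response; with Docker Desktop port publishing this typically means the proxy accepted the connection while nothing was listening inside the container. This can be checked outside the Go client; a minimal sketch, assuming curl is available on the host:

    curl -vk https://127.0.0.1:63565/livez

An immediate disconnect with no HTTP status line (rather than a 401/403 from the apiserver) would match the EOF failures recorded above.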
start_stop_delete_test.go:272: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:272: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000
start_stop_delete_test.go:272: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000: exit status 2 (601.4756ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:272: status error: exit status 2 (may be ok)
start_stop_delete_test.go:272: "no-preload-184000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:273: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
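The check the harness was running can be reproduced by hand; a minimal sketch, assuming the kubeconfig context minikube created for the profile is still named no-preload-184000:

    kubectl --context no-preload-184000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

With the apiserver in the state captured below, this would be expected to fail with the same EOF the pod-list polling reported.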
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-184000
helpers_test.go:244: (dbg) docker inspect no-preload-184000:

-- stdout --
	[
	    {
	        "Id": "335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed",
	        "Created": "2025-12-17T01:54:01.802457191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 454689,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T02:05:04.431751717Z",
	            "FinishedAt": "2025-12-17T02:05:01.217443908Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/hostname",
	        "HostsPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/hosts",
	        "LogPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed-json.log",
	        "Name": "/no-preload-184000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-184000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-184000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-184000",
	                "Source": "/var/lib/docker/volumes/no-preload-184000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-184000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-184000",
	                "name.minikube.sigs.k8s.io": "no-preload-184000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd75d9fe5c78c005b0249a246e3b62cf2a8873f5a0bf590eec1667b2401d46f3",
	            "SandboxKey": "/var/run/docker/netns/cd75d9fe5c78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63566"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63567"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63568"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63569"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63565"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-184000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6adb91d102dfa92bfa154127e93e39401be06a5d21df5043f3e85e012e93e321",
	                    "EndpointID": "2717bfe6e1d6a16c3b3b21a01d0c25052321fa1d05a920cee0a218e0ea604d53",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-184000",
	                        "335cbfb80690"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
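The Ports block in the inspect output above is where the failing URL comes from: container port 8443/tcp (the apiserver) is published on 127.0.0.1:63565, the endpoint every pod-list request was hitting. A single field like that can be extracted with a Go template instead of scanning the full JSON; a minimal sketch against the same container name (quoting may need adjusting for PowerShell):

    docker inspect --format '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' no-preload-184000

For the state captured here this would print 63565.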
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-184000 -n no-preload-184000
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-184000 -n no-preload-184000: exit status 2 (601.1466ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
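Read together, the two status probes pin down the failure mode: the Host is Running while the APIServer is Stopped, so the container came back up after the restart but Kubernetes inside it did not. The per-component breakdown is also available in a single call instead of one --format query per field; a minimal sketch against the same profile:

    out/minikube-windows-amd64.exe status -p no-preload-184000 --output=json

which reports the Host, Kubelet, APIServer, and Kubeconfig states as one JSON object (the non-zero exit code is how status encodes degraded components, hence the "(may be ok)" note above).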
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-184000 logs -n 25
E1217 02:20:22.411547    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-184000 logs -n 25: (1.7200103s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │                     │
	│ image   │ embed-certs-653800 image list --format=json                                                                                                                                                                                │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ default-k8s-diff-port-278200 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-184000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	│ stop    │ -p no-preload-184000 --alsologtostderr -v=3                                                                                                                                                                                │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ addons  │ enable dashboard -p no-preload-184000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ start   │ -p no-preload-184000 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-383500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	│ stop    │ -p newest-cni-383500 --alsologtostderr -v=3                                                                                                                                                                                │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │ 17 Dec 25 02:07 UTC │
	│ addons  │ enable dashboard -p newest-cni-383500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │ 17 Dec 25 02:07 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │                     │
	│ image   │ newest-cni-383500 image list --format=json                                                                                                                                                                                 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:13 UTC │ 17 Dec 25 02:13 UTC │
	│ pause   │ -p newest-cni-383500 --alsologtostderr -v=1                                                                                                                                                                                │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:13 UTC │ 17 Dec 25 02:13 UTC │
	│ unpause │ -p newest-cni-383500 --alsologtostderr -v=1                                                                                                                                                                                │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:14 UTC │ 17 Dec 25 02:14 UTC │
	│ delete  │ -p newest-cni-383500                                                                                                                                                                                                       │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:14 UTC │ 17 Dec 25 02:14 UTC │
	│ delete  │ -p newest-cni-383500                                                                                                                                                                                                       │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:14 UTC │ 17 Dec 25 02:14 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:07:37
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 02:07:37.336708    6296 out.go:360] Setting OutFile to fd 968 ...
	I1217 02:07:37.380113    6296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:07:37.380113    6296 out.go:374] Setting ErrFile to fd 1700...
	I1217 02:07:37.380113    6296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:07:37.394455    6296 out.go:368] Setting JSON to false
	I1217 02:07:37.396490    6296 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8845,"bootTime":1765928411,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 02:07:37.397485    6296 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 02:07:37.401853    6296 out.go:179] * [newest-cni-383500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 02:07:37.405009    6296 notify.go:221] Checking for updates...
	I1217 02:07:37.407761    6296 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:37.412054    6296 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:07:37.415031    6296 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 02:07:37.416942    6296 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:07:37.418887    6296 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1217 02:07:37.439676    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:07:37.422499    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:37.422499    6296 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:07:37.541250    6296 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 02:07:37.544536    6296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:07:37.790862    6296 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:07:37.763465755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:07:37.793941    6296 out.go:179] * Using the docker driver based on existing profile
	I1217 02:07:37.795944    6296 start.go:309] selected driver: docker
	I1217 02:07:37.795944    6296 start.go:927] validating driver "docker" against &{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:37.796941    6296 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:07:37.881125    6296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:07:38.106129    6296 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:07:38.085504737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:07:38.106129    6296 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 02:07:38.106129    6296 cni.go:84] Creating CNI manager for ""
	I1217 02:07:38.106661    6296 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:07:38.106789    6296 start.go:353] cluster config:
	{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:38.110370    6296 out.go:179] * Starting "newest-cni-383500" primary control-plane node in "newest-cni-383500" cluster
	I1217 02:07:38.113499    6296 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 02:07:38.115628    6296 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:07:38.118799    6296 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:07:38.118867    6296 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:07:38.118972    6296 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 02:07:38.119036    6296 cache.go:65] Caching tarball of preloaded images
	I1217 02:07:38.119094    6296 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 02:07:38.119094    6296 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 02:07:38.119094    6296 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 02:07:38.197259    6296 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:07:38.197259    6296 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:07:38.197259    6296 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:07:38.197259    6296 start.go:360] acquireMachinesLock for newest-cni-383500: {Name:mk34ae41921c4a11acc2a38ede8796b825a35934 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:07:38.197259    6296 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-383500"
	I1217 02:07:38.197259    6296 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:07:38.197259    6296 fix.go:54] fixHost starting: 
	I1217 02:07:38.204499    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:38.259240    6296 fix.go:112] recreateIfNeeded on newest-cni-383500: state=Stopped err=<nil>
	W1217 02:07:38.259240    6296 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 02:07:38.262335    6296 out.go:252] * Restarting existing docker container for "newest-cni-383500" ...
	I1217 02:07:38.265716    6296 cli_runner.go:164] Run: docker start newest-cni-383500
	I1217 02:07:38.804123    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:38.863188    6296 kic.go:430] container "newest-cni-383500" state is running.
	I1217 02:07:38.868900    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:38.924169    6296 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 02:07:38.926083    6296 machine.go:94] provisionDockerMachine start ...
	I1217 02:07:38.928987    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:38.984001    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:38.984993    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:38.984993    6296 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:07:38.986003    6296 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 02:07:42.161557    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 02:07:42.161646    6296 ubuntu.go:182] provisioning hostname "newest-cni-383500"
	I1217 02:07:42.166827    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.231443    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:42.231698    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:42.231698    6296 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-383500 && echo "newest-cni-383500" | sudo tee /etc/hostname
	I1217 02:07:42.423907    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 02:07:42.432743    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.491085    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:42.491085    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:42.491085    6296 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-383500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-383500/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-383500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:07:42.667009    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
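
The shell block above is the provisioner's hostname-resolution fix: it checks /etc/hosts for a line ending in the node's hostname, rewrites an existing 127.0.1.1 entry in place if one is present, and otherwise appends one. A minimal way to confirm the result by hand (a sketch, not part of the test; the profile name is taken from this log, and `minikube ssh --` passes the command through to the node):

    minikube -p newest-cni-383500 ssh -- grep newest-cni-383500 /etc/hosts
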
	I1217 02:07:42.667009    6296 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 02:07:42.667009    6296 ubuntu.go:190] setting up certificates
	I1217 02:07:42.667009    6296 provision.go:84] configureAuth start
	I1217 02:07:42.671320    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:42.724474    6296 provision.go:143] copyHostCerts
	I1217 02:07:42.725072    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 02:07:42.725072    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 02:07:42.725072    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 02:07:42.726229    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 02:07:42.726229    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 02:07:42.726812    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 02:07:42.727386    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 02:07:42.727386    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 02:07:42.727386    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 02:07:42.728644    6296 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-383500 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-383500]
	I1217 02:07:42.882778    6296 provision.go:177] copyRemoteCerts
	I1217 02:07:42.886667    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:07:42.889412    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.946034    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:43.080244    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 02:07:43.111350    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:07:43.145228    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 02:07:43.176328    6296 provision.go:87] duration metric: took 509.312ms to configureAuth
	I1217 02:07:43.176328    6296 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:07:43.176328    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:43.180705    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.236378    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.237514    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.237514    6296 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 02:07:43.404492    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 02:07:43.404492    6296 ubuntu.go:71] root file system type: overlay
	I1217 02:07:43.405056    6296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 02:07:43.408624    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.465282    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.465408    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.465408    6296 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 02:07:43.658319    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 02:07:43.662395    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.719191    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.719552    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.719552    6296 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 02:07:43.890999    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
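
The `diff ... || { mv ...; systemctl ...; }` one-liner above is an idempotent-update pattern: the candidate unit is written to docker.service.new, and only when `diff -u` exits non-zero (the files differ) does the `||` branch move it into place, `daemon-reload`, re-enable, and restart the daemon. Here the units matched, so the restart was skipped. The same shape, reduced to a generic sketch with placeholder paths and a hypothetical service name:

    # apply a new config only if it differs from the one in place
    diff -u /etc/myapp.conf /etc/myapp.conf.new \
      || { sudo mv /etc/myapp.conf.new /etc/myapp.conf; sudo systemctl restart myapp; }
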
	I1217 02:07:43.890999    6296 machine.go:97] duration metric: took 4.9648419s to provisionDockerMachine
	I1217 02:07:43.890999    6296 start.go:293] postStartSetup for "newest-cni-383500" (driver="docker")
	I1217 02:07:43.890999    6296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:07:43.895385    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:07:43.899109    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.952181    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.085157    6296 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:07:44.092998    6296 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:07:44.093086    6296 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:07:44.093086    6296 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 02:07:44.093465    6296 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 02:07:44.094379    6296 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 02:07:44.099969    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:07:44.115031    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 02:07:44.146317    6296 start.go:296] duration metric: took 255.2637ms for postStartSetup
	I1217 02:07:44.150381    6296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:07:44.153098    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.206142    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.337637    6296 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:07:44.346313    6296 fix.go:56] duration metric: took 6.1489614s for fixHost
	I1217 02:07:44.346313    6296 start.go:83] releasing machines lock for "newest-cni-383500", held for 6.1489614s
	I1217 02:07:44.350643    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:44.409164    6296 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 02:07:44.413957    6296 ssh_runner.go:195] Run: cat /version.json
	I1217 02:07:44.414540    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.416694    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.466739    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.469418    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	W1217 02:07:44.591848    6296 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
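
The failure above is the host-side binary name leaking into the node: `curl.exe` is the Windows executable, but the probe runs over SSH inside the Linux node, where only `curl` exists, so the registry reachability check fails with exit 127 and produces the proxy warnings logged shortly after. The check the test intended, using the in-node binary (a sketch; profile name taken from this log):

    minikube -p newest-cni-383500 ssh -- curl -sS -m 2 https://registry.k8s.io/
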
	I1217 02:07:44.598090    6296 ssh_runner.go:195] Run: systemctl --version
	I1217 02:07:44.614283    6296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:07:44.624324    6296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:07:44.628955    6296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:07:44.642200    6296 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:07:44.642243    6296 start.go:496] detecting cgroup driver to use...
	I1217 02:07:44.642333    6296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:07:44.642453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:07:44.671216    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:07:44.689408    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:07:44.702919    6296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:07:44.707856    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:07:44.727869    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:07:44.747180    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	W1217 02:07:44.751020    6296 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 02:07:44.751020    6296 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 02:07:44.766866    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:07:44.786853    6296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:07:44.806986    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:07:44.828346    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:07:44.848400    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:07:44.870349    6296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:07:44.887217    6296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:07:44.905216    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:45.047629    6296 ssh_runner.go:195] Run: sudo systemctl restart containerd
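
The preceding run of `sed` edits rewrites /etc/containerd/config.toml to match the detected host settings before this restart: the sandbox (pause) image is pinned, `SystemdCgroup` is forced to `false` to match the "cgroupfs" driver detected above, the legacy `io.containerd.runtime.v1.linux` and `runc.v1` runtimes are mapped to `runc.v2`, the CNI conf_dir is pointed at /etc/cni/net.d, and unprivileged ports are enabled. A quick spot-check on the node (a sketch):

    minikube -p newest-cni-383500 ssh -- grep -E 'SystemdCgroup|sandbox_image' /etc/containerd/config.toml
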
	I1217 02:07:45.203749    6296 start.go:496] detecting cgroup driver to use...
	I1217 02:07:45.203842    6296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:07:45.209421    6296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 02:07:45.236823    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:07:45.259331    6296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 02:07:45.337368    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:07:45.361492    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:07:45.381383    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:07:45.409600    6296 ssh_runner.go:195] Run: which cri-dockerd
	I1217 02:07:45.421762    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 02:07:45.435668    6296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 02:07:45.461708    6296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 02:07:45.616228    6296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 02:07:45.751670    6296 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 02:07:45.751670    6296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 02:07:45.778504    6296 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 02:07:45.800985    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:45.956342    6296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 02:07:46.816501    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:07:46.840410    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 02:07:46.865817    6296 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 02:07:46.890943    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:07:46.914319    6296 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 02:07:47.058242    6296 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 02:07:47.214522    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:47.355565    6296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	W1217 02:07:47.472644    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:07:47.382801    6296 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 02:07:47.407455    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:47.558893    6296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 02:07:47.666138    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:07:47.686246    6296 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 02:07:47.690618    6296 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 02:07:47.697013    6296 start.go:564] Will wait 60s for crictl version
	I1217 02:07:47.702316    6296 ssh_runner.go:195] Run: which crictl
	I1217 02:07:47.713878    6296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:07:47.755301    6296 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 02:07:47.758809    6296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:07:47.803772    6296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:07:47.845573    6296 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 02:07:47.849368    6296 cli_runner.go:164] Run: docker exec -t newest-cni-383500 dig +short host.docker.internal
	I1217 02:07:47.978778    6296 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 02:07:47.983162    6296 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 02:07:47.993198    6296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:07:48.011887    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:48.072090    6296 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 02:07:48.073820    6296 kubeadm.go:884] updating cluster {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:07:48.073820    6296 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:07:48.077080    6296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 02:07:48.110342    6296 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 02:07:48.110411    6296 docker.go:621] Images already preloaded, skipping extraction
	I1217 02:07:48.113821    6296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 02:07:48.144461    6296 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
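
`docker images` is deliberately run twice here: the first listing decides whether the preload tarball still needs extracting ("Images already preloaded, skipping extraction"), and the second feeds the cache check that concludes "Images are preloaded, skipping loading". The same listing can be reproduced against the node (a sketch; profile name from this log):

    minikube -p newest-cni-383500 ssh -- docker images --format '{{.Repository}}:{{.Tag}}'
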
	I1217 02:07:48.144530    6296 cache_images.go:86] Images are preloaded, skipping loading
	I1217 02:07:48.144530    6296 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1217 02:07:48.144779    6296 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-383500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
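
The kubelet unit fragment above uses the same drop-in convention as the docker unit earlier in this log: the bare `ExecStart=` first clears any inherited start command (systemd rejects a second `ExecStart=` for non-oneshot services), and the second `ExecStart=` supplies the full invocation with the node-specific `--hostname-override` and `--node-ip`. The merged unit, as systemd resolves it, can be inspected with (a sketch):

    minikube -p newest-cni-383500 ssh -- sudo systemctl cat kubelet
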
	I1217 02:07:48.149102    6296 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 02:07:48.225894    6296 cni.go:84] Creating CNI manager for ""
	I1217 02:07:48.225894    6296 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:07:48.225894    6296 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 02:07:48.225894    6296 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-383500 NodeName:newest-cni-383500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:07:48.226504    6296 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-383500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
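
The generated file above is the multi-document YAML that kubeadm consumes: `InitConfiguration` (node-local bootstrap settings and CRI socket), `ClusterConfiguration` (control-plane endpoint, certSANs, pod/service CIDRs), `KubeletConfiguration`, and `KubeProxyConfiguration`, separated by `---`. Recent kubeadm releases can sanity-check such a file before it is used; a sketch, assuming the kubeadm binary staged by this run supports `config validate`:

    minikube -p newest-cni-383500 ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
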
	
	I1217 02:07:48.230913    6296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 02:07:48.243749    6296 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:07:48.248634    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:07:48.262382    6296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 02:07:48.284386    6296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:07:48.306623    6296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1217 02:07:48.332101    6296 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:07:48.341865    6296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:07:48.360919    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:48.498620    6296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:07:48.520308    6296 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500 for IP: 192.168.76.2
	I1217 02:07:48.520346    6296 certs.go:195] generating shared ca certs ...
	I1217 02:07:48.520390    6296 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:48.520420    6296 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 02:07:48.521152    6296 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 02:07:48.521359    6296 certs.go:257] generating profile certs ...
	I1217 02:07:48.521695    6296 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key
	I1217 02:07:48.521695    6296 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8
	I1217 02:07:48.522472    6296 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key
	I1217 02:07:48.523217    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 02:07:48.523515    6296 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 02:07:48.523598    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 02:07:48.523888    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 02:07:48.524140    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 02:07:48.524399    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 02:07:48.525045    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 02:07:48.526649    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:07:48.558725    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 02:07:48.590333    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:07:48.621493    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 02:07:48.650907    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:07:48.678948    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:07:48.708871    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:07:48.738822    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 02:07:48.769873    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 02:07:48.801411    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 02:07:48.828208    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:07:48.859551    6296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:07:48.888197    6296 ssh_runner.go:195] Run: openssl version
	I1217 02:07:48.903194    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.920018    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 02:07:48.936734    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.943690    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.948571    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.997651    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:07:49.015514    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.035513    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:07:49.056511    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.065394    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.070742    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.117805    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:07:49.140198    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.156992    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 02:07:49.175485    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.184194    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.187479    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.237543    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 02:07:49.254809    6296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:07:49.269508    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:07:49.317073    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:07:49.365797    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:07:49.413853    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:07:49.462871    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:07:49.515512    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
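
Two openssl idioms appear in the certificate checks above. `openssl x509 -hash -noout` prints the subject hash that OpenSSL trust directories use for symlink names, which is why each printed hash is then probed as /etc/ssl/certs/<hash>.0 (3ec20f2e, b5213941, and 51391683 here). `-checkend 86400` exits non-zero if the certificate expires within the next 86400 seconds, so each of the final commands asserts "still valid for at least a day". By hand (a sketch with placeholder file names):

    # trust-store symlink name for a CA cert, then a 24h-validity check
    openssl x509 -hash -noout -in ca.pem
    openssl x509 -noout -in apiserver.crt -checkend 86400 && echo 'valid for >=24h'
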
	I1217 02:07:49.558666    6296 kubeadm.go:401] StartCluster: {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:49.563317    6296 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 02:07:49.602899    6296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:07:49.616365    6296 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:07:49.616365    6296 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:07:49.622022    6296 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:07:49.637152    6296 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:07:49.641090    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.693295    6296 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-383500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:49.693843    6296 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-383500" cluster setting kubeconfig missing "newest-cni-383500" context setting]
	I1217 02:07:49.694722    6296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:49.716755    6296 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:07:49.731850    6296 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1217 02:07:49.731850    6296 kubeadm.go:602] duration metric: took 115.4836ms to restartPrimaryControlPlane
	I1217 02:07:49.731850    6296 kubeadm.go:403] duration metric: took 173.1816ms to StartCluster
	I1217 02:07:49.731850    6296 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:49.731850    6296 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:49.732839    6296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:49.734654    6296 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 02:07:49.734654    6296 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:07:49.734654    6296 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:70] Setting dashboard=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:49.734654    6296 addons.go:70] Setting default-storageclass=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.734654    6296 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:239] Setting addon dashboard=true in "newest-cni-383500"
	W1217 02:07:49.734654    6296 addons.go:248] addon dashboard should already be in state true
	I1217 02:07:49.735179    6296 host.go:66] Checking if "newest-cni-383500" exists ...
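The toEnable map above drives the "Setting addon X=true" lines that follow it: every key is an addon name, only the true entries are acted on, and this run carries exactly three of them (dashboard, default-storageclass, storage-provisioner). A trimmed sketch of that filtering loop:

    package main

    import (
    	"fmt"
    	"sort"
    )

    // Only the enabled entries from the (trimmed) toEnable map produce
    // "Setting addon ..." work items; the rest are skipped.
    func main() {
    	toEnable := map[string]bool{
    		"dashboard":            true,
    		"default-storageclass": true,
    		"storage-provisioner":  true,
    		"metrics-server":       false,
    	}
    	names := make([]string, 0, len(toEnable))
    	for name, enabled := range toEnable {
    		if enabled {
    			names = append(names, name)
    		}
    	}
    	sort.Strings(names) // map order is random; sort for stable logs
    	for _, name := range names {
    		fmt.Printf("Setting addon %s=true in %q\n", name, "newest-cni-383500")
    	}
    }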
	I1217 02:07:49.739634    6296 out.go:179] * Verifying Kubernetes components...
	I1217 02:07:49.743427    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.744378    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.744378    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.745812    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:49.809135    6296 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:07:49.809532    6296 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 02:07:49.812989    6296 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:49.812989    6296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:07:49.816981    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.817010    6296 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:07:49.818467    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:07:49.818467    6296 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:07:49.823270    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.824987    6296 addons.go:239] Setting addon default-storageclass=true in "newest-cni-383500"
	I1217 02:07:49.825100    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.836645    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.881995    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.881995    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.889991    6296 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:49.889991    6296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
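"scp memory -->" in these install/scp pairs means the manifest never exists as a local file: it is rendered in memory and streamed over the SSH connection whose host port was just looked up from the container (127.0.0.1:63782, user docker, per the sshutil lines). A sketch of the same transfer piped through the stock ssh client and `sudo tee`, which is an assumption standing in for minikube's sshutil-based copy:

    package main

    import (
    	"bytes"
    	"log"
    	"os/exec"
    )

    // pushManifest streams in-memory bytes to a file on the node by
    // piping them into `ssh ... sudo tee`.
    func pushManifest(host, port, dst string, manifest []byte) error {
    	cmd := exec.Command("ssh", "-p", port, host, "sudo tee "+dst+" >/dev/null")
    	cmd.Stdin = bytes.NewReader(manifest)
    	return cmd.Run()
    }

    func main() {
    	yaml := []byte("apiVersion: v1\nkind: Namespace\nmetadata:\n  name: kubernetes-dashboard\n")
    	if err := pushManifest("docker@127.0.0.1", "63782",
    		"/etc/kubernetes/addons/dashboard-ns.yaml", yaml); err != nil {
    		log.Fatal(err)
    	}
    }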
	I1217 02:07:49.892991    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.925992    6296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:07:49.943010    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.950996    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:50.005058    6296 api_server.go:52] waiting for apiserver process to appear ...
	I1217 02:07:50.009064    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
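From here the run polls for the apiserver with the exact pgrep shown above while the addon applies proceed in parallel; the repeats of this command below are that poll firing every half second. A sketch of the wait loop, with the deadline taken from the "Will wait 6m0s for node" line:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    // Poll the pgrep pattern from the log until the kube-apiserver
    // process appears or the node wait deadline passes.
    func main() {
    	deadline := time.Now().Add(6 * time.Minute) // "Will wait 6m0s for node"
    	for time.Now().Before(deadline) {
    		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    		if err == nil {
    			fmt.Printf("apiserver process appeared, pid %s", out)
    			return
    		}
    		time.Sleep(time.Second)
    	}
    	fmt.Println("timed out waiting for apiserver process")
    }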
	I1217 02:07:50.011068    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.014077    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:07:50.014077    6296 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:07:50.034057    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:07:50.034057    6296 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:07:50.102553    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:07:50.102611    6296 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:07:50.106900    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:50.124027    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:07:50.124027    6296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 02:07:50.189590    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:07:50.189677    6296 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1217 02:07:50.190082    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.190082    6296 retry.go:31] will retry after 343.200838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
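This first failure explains every error block that follows: `kubectl apply` validates manifests against the apiserver's /openapi/v2 endpoint, the apiserver is not yet listening on localhost:8443, so validation dies with connection refused, and minikube queues a retry ("will retry after 343.200838ms") instead of passing --validate=false as the stderr suggests. A sketch of that retry-with-jittered-backoff shape from retry.go, where the exact jitter formula is an assumption:

    package main

    import (
    	"errors"
    	"fmt"
    	"math/rand"
    	"time"
    )

    // retry runs fn up to attempts times, sleeping a growing, jittered
    // interval between failures — the pattern behind the retry.go lines.
    func retry(attempts int, base time.Duration, fn func() error) error {
    	var err error
    	for i := 0; i < attempts; i++ {
    		if err = fn(); err == nil {
    			return nil
    		}
    		d := base*time.Duration(1<<i) + time.Duration(rand.Int63n(int64(base)))
    		fmt.Printf("will retry after %v: %v\n", d, err)
    		time.Sleep(d)
    	}
    	return err
    }

    func main() {
    	calls := 0
    	err := retry(5, 200*time.Millisecond, func() error {
    		calls++
    		if calls < 3 {
    			return errors.New("connect: connection refused")
    		}
    		return nil
    	})
    	fmt.Println("result:", err)
    }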
	I1217 02:07:50.212250    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:07:50.212311    6296 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:07:50.231619    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:07:50.231619    6296 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W1217 02:07:50.241078    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.241078    6296 retry.go:31] will retry after 338.608253ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.254747    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:07:50.254794    6296 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:07:50.277655    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:50.277655    6296 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:07:50.303268    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:50.381205    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.381205    6296 retry.go:31] will retry after 204.689537ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.510673    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:50.538343    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.585518    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:50.590250    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
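From this round on the applies carry --force, kubectl's fallback for deleting and re-creating resources that cannot be patched in place. It makes no difference here because the failure happens client-side during validation, before any resource is touched, so the stderr in the rounds below is unchanged from the plain-apply attempts.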
	W1217 02:07:50.625635    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.625793    6296 retry.go:31] will retry after 198.686568ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.703247    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.703247    6296 retry.go:31] will retry after 199.792365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.713669    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.714671    6296 retry.go:31] will retry after 441.125735ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.831068    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.910787    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:50.921027    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.921027    6296 retry.go:31] will retry after 637.088373ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.993148    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.993148    6296 retry.go:31] will retry after 819.774881ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.009768    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.161082    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:51.282295    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.282369    6296 retry.go:31] will retry after 677.278565ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.510844    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.563702    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:51.642986    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.642986    6296 retry.go:31] will retry after 1.231128198s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
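The storage-provisioner retry intervals so far (343ms, 199ms, 637ms, now 1.23s) come from a jittered backoff, so they trend upward without being strictly monotonic; the loop keeps cycling alongside the half-second pgrep poll until the apiserver finally answers on 8443 or the 6-minute node wait expires.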
	I1217 02:07:51.817677    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:51.902470    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.902470    6296 retry.go:31] will retry after 1.160161898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.964724    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:52.009393    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:52.053520    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.053520    6296 retry.go:31] will retry after 497.775491ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.510530    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:52.556698    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:52.641425    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.641425    6296 retry.go:31] will retry after 893.419079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.880811    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:52.961643    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.961643    6296 retry.go:31] will retry after 1.354718896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.009905    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:53.068292    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:53.159843    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.159885    6296 retry.go:31] will retry after 830.811591ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.510300    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:53.539679    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:53.634195    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.634195    6296 retry.go:31] will retry after 1.875797166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.997012    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:54.010116    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:54.085004    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.085004    6296 retry.go:31] will retry after 2.403477641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.321510    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:54.401677    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.401677    6296 retry.go:31] will retry after 2.197762331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.509750    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:55.011577    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:55.509949    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:55.514301    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:55.590724    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:55.590724    6296 retry.go:31] will retry after 3.771224323s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.010995    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:56.493760    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:56.509755    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:56.580067    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.580067    6296 retry.go:31] will retry after 2.862008002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.606008    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:56.692846    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.693375    6296 retry.go:31] will retry after 3.419223727s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:57.009866    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:57.510327    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:57.510945    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:07:58.010333    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:58.511391    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:59.013796    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:59.367655    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:59.447582    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:59.457416    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.457416    6296 retry.go:31] will retry after 6.254269418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.510215    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:59.536524    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.536524    6296 retry.go:31] will retry after 4.240139996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.010517    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:00.118263    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:00.197472    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.197472    6296 retry.go:31] will retry after 5.486941273s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.511349    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:01.012031    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:01.510877    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:02.011372    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:02.510995    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.011372    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.511479    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.781390    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:03.867561    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:03.867561    6296 retry.go:31] will retry after 5.255488401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:04.011296    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:04.510695    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.011055    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.510174    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.690069    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:05.718147    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:05.792389    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:05.792389    6296 retry.go:31] will retry after 3.294946391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:05.802187    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:05.802187    6296 retry.go:31] will retry after 6.599881974s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:06.010721    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:06.509941    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:07.010092    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:07.511303    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:07.543861    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:08.011059    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:08.511015    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.009909    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.092821    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:09.127423    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:09.180638    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.180716    6296 retry.go:31] will retry after 13.056189647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:09.211988    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.212069    6296 retry.go:31] will retry after 13.872512266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.510829    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:10.010907    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:10.513112    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:11.010572    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:11.509543    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.010570    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.409071    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:12.497495    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:12.497495    6296 retry.go:31] will retry after 9.788092681s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:12.510004    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:13.011338    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:13.509984    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:14.010499    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:14.511126    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.010949    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.511741    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:16.011278    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:16.511157    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:17.010863    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
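Between apply attempts, bring-up polls for the apiserver process itself, running sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms; the interleaved W-level lines from pid 6768 below belong to a second cluster under test (no-preload-184000) whose own API endpoint on 127.0.0.1:63565 is returning EOF. A poll loop of this shape (an assumed sketch, not minikube's implementation):

// Assumed sketch of the readiness poll seen above: pgrep exits 0 only when
// a process matches the pattern (-x exact match, -f against the full command
// line, -n newest). minikube runs this inside the node over SSH; here it
// runs locally for illustration.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	deadline := time.After(2 * time.Minute)
	for {
		select {
		case <-ticker.C:
			if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
				fmt.Println("kube-apiserver is running")
				return
			}
		case <-deadline:
			fmt.Println("timed out waiting for kube-apiserver")
			return
		}
	}
}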
	W1217 02:08:17.577088    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:17.511273    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.010782    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.510594    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:19.011193    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:19.512050    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:20.011700    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:20.511001    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.010461    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.510457    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:22.011002    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:22.242227    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:22.290434    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:22.384800    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.384884    6296 retry.go:31] will retry after 11.75975207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:22.424758    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.424758    6296 retry.go:31] will retry after 15.557196078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.510556    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:23.011645    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:23.090496    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:23.176544    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:23.176625    6296 retry.go:31] will retry after 13.26458747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
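The quoted retry delays (13.87s, 9.79s, 11.76s, 15.56s, 13.26s, ...) do not grow monotonically, which points to randomized (jittered) intervals rather than plain exponential backoff. A minimal sketch of that pattern, assuming jitter around a base interval; this is not minikube's actual retry.go:

// retryWithJitter retries fn up to attempts times, sleeping a randomized
// multiple of base between failures so concurrent retriers do not
// synchronize. Hypothetical helper for illustration only.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func retryWithJitter(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Scale base by a random factor in [0.5, 1.5) so successive waits vary.
		d := time.Duration(float64(base) * (0.5 + rand.Float64()))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	_ = retryWithJitter(3, 15*time.Second, func() error {
		return errors.New("dial tcp [::1]:8443: connect: connection refused")
	})
}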
	I1217 02:08:23.510872    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.011245    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.511483    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:25.011656    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:25.510967    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:26.012125    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:26.512672    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:27.011155    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:27.612061    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:27.512368    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:28.010889    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:28.511767    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.011035    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.512111    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:30.010919    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:30.510464    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:31.010433    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:31.511392    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.010680    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.510963    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:33.011818    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:33.511638    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:34.011591    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:34.151810    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:34.242474    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:34.242474    6296 retry.go:31] will retry after 23.644538854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:34.513602    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.011269    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.511142    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:36.011267    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:36.446774    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:08:36.511283    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:36.541778    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:36.541860    6296 retry.go:31] will retry after 14.024805043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:37.010743    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:37.653192    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:37.510520    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:37.987959    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:08:38.011587    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:38.113276    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:38.113276    6296 retry.go:31] will retry after 20.609884455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:38.511817    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:39.012624    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:39.511353    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:40.011079    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:40.511636    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.011582    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.512671    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:42.011503    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:42.511640    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:43.011054    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:43.510485    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.011395    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.511333    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:45.011435    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:45.513316    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:46.012600    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:46.512307    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.012227    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.512888    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:48.011996    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:48.511276    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:49.011053    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:49.511776    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:50.011678    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:50.050889    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.050889    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:50.055201    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:50.085770    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.085770    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:50.090316    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:50.123762    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.123762    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:50.127529    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:50.157626    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.157626    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:50.163652    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:50.189945    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.189945    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:50.193620    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:50.222819    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.222866    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:50.227818    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:50.256909    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.256909    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:50.260970    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:50.290387    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.290387    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
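The sweep above looks for a container, running or exited, for each control-plane component via the k8s_ name prefix that kubelet's Docker CRI integration gives pod containers; zero matches for every component means kubelet never materialized the static pods, consistent with nothing answering on 8443. A hypothetical reproduction of the same check:

// List containers for each expected component, matching the log's
// `docker ps -a --filter=name=k8s_<component> --format={{.ID}}` probes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("%s: docker ps failed: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers %v\n", c, len(ids), ids)
	}
}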
	I1217 02:08:50.290387    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:50.290387    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:50.357876    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:50.357876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:50.420098    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:50.420098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:50.460376    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:50.460376    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:50.542989    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:50.534097    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.535406    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.536541    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.537655    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.539165    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:50.534097    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.535406    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.536541    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.537655    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.539165    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:50.542989    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:50.542989    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:50.570331    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:50.645772    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:50.645772    6296 retry.go:31] will retry after 16.344343138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:47.695483    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:53.075519    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:53.098924    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:53.131675    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.131675    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:53.135542    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:53.166511    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.166511    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:53.170265    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:53.198547    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.198547    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:53.202694    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:53.232459    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.232459    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:53.235758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:53.263802    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.263802    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:53.268318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:53.296956    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.296956    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:53.301349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:53.331331    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.331331    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:53.335255    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:53.367520    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.367550    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:53.367577    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:53.367602    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:53.453750    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:53.444459    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.445431    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.446930    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.448003    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.449000    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:53.444459    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.445431    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.446930    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.448003    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.449000    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:53.453837    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:53.453887    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:53.485058    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:53.485058    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:53.540050    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:53.540050    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:53.604101    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:53.604101    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.146858    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:56.172227    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:56.203897    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.203941    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:56.207562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:56.236114    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.236114    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:56.240341    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:56.274958    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.274958    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:56.280577    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:56.308906    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.308906    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:56.312811    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:56.340777    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.340836    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:56.343843    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:56.371408    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.371441    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:56.374771    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:56.406487    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.406487    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:56.410973    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:56.441247    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.441247    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:56.441247    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:56.441247    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:56.506877    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:56.506877    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.548841    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:56.548841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:56.633101    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:56.624778    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.625942    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.626969    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.628325    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.629359    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:56.624778    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.625942    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.626969    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.628325    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.629359    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:56.633101    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:56.633101    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:56.659421    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:56.659457    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:57.892877    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:57.970838    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:57.970838    6296 retry.go:31] will retry after 27.385193451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:58.728649    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:58.834139    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:58.834680    6296 retry.go:31] will retry after 32.13321777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:59.213728    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:59.238361    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:59.266298    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.266298    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:59.270295    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:59.299414    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.299414    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:59.302581    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:59.335627    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.335627    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:59.339238    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:59.367042    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.367042    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:59.371258    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:59.401507    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.401507    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:59.405468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:59.436657    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.436657    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:59.440955    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:59.471027    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.471027    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:59.474047    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:59.505164    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.505164    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:59.505164    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:59.505164    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:59.533835    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:59.533835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:59.586695    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:59.587671    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:59.648841    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:59.648841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:59.688691    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:59.688691    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:59.777044    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:59.763261    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.764003    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.767722    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.770018    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.771065    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:59.763261    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.764003    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.767722    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.770018    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.771065    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
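	[editor's note] Each diagnostic cycle from here on repeats the same probe: look for control-plane containers by the k8s_<component> name prefix that kubelet's Docker integration gives pod containers, find none, then fall back to host-level logs. Condensed from the commands logged above into one bash loop:

	    # Probe for every expected control-plane container by name prefix.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(docker ps -a --filter=name=k8s_$c --format='{{.ID}}')
	      [ -z "$ids" ] && echo "No container was found matching \"$c\""
	    done
	    # With no containers to inspect, only host-level logs remain:
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400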
	I1217 02:09:02.282707    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:02.307570    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:02.340326    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.340412    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:02.343993    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:02.374035    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.374079    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:02.377688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	W1217 02:08:57.736771    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:02.409724    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.409724    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:02.414154    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:02.442993    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.442993    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:02.447591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:02.474966    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.474966    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:02.479447    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:02.511675    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.511675    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:02.515939    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:02.544034    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.544034    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:02.548633    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:02.578196    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.578196    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:02.578196    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:02.578196    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:02.642449    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:02.643420    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:02.681562    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:02.681562    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:02.766017    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:02.754951    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.756418    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.757119    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.759531    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.760553    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:02.754951    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.756418    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.757119    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.759531    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.760553    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:02.766017    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:02.766017    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:02.795166    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:02.795166    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:05.347132    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:05.372840    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:05.424611    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.424686    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:05.428337    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:05.461682    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.461682    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:05.465790    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:05.495395    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.495395    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:05.499215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:05.528620    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.528620    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:05.532226    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:05.560375    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.560375    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:05.564119    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:05.595214    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.595214    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:05.600088    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:05.633183    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.633183    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:05.636776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:05.664840    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.664840    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:05.664840    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:05.664840    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:05.718503    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:05.718503    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:05.781489    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:05.781489    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:05.821081    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:05.821081    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:05.905451    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:05.896107    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.897043    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.898918    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.899910    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.901056    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:05.896107    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.897043    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.898918    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.899910    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.901056    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:05.905451    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:05.905451    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:06.996471    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:09:07.077056    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:07.077056    6296 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
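	[editor's note] At this point the 'default-storageclass' addon gives up for good. The root symptom behind every error in this run is the same refused dial to localhost:8443, i.e. no kube-apiserver listening. A quick manual check of that symptom (assumption: run inside the node, e.g. via minikube ssh):

	    # Probe the healthz endpoint on the port every error above points at.
	    # -k skips TLS verification; -sS stays quiet except for errors.
	    curl -ksS https://localhost:8443/healthz || echo "apiserver not listening on 8443"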
	I1217 02:09:08.443326    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:08.470285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:08.499191    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.499191    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:08.503346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:08.531727    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.531727    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:08.535874    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:08.567724    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.567724    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:08.571504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:08.601814    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.601814    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:08.605003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:08.638738    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.638815    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:08.642116    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:08.672949    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.672949    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:08.676953    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:08.706081    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.706145    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:08.709298    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:08.737856    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.737856    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:08.737856    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:08.737856    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:08.798236    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:08.798236    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:08.838053    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:08.838053    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:08.925271    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:08.915579    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.916804    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.917832    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.919242    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.920277    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:08.915579    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.916804    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.917832    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.919242    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.920277    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:08.925271    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:08.925271    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:08.952860    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:08.952934    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.505032    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:11.532273    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:11.560855    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.560907    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:11.564808    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:11.595967    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.596024    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:11.599911    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:11.628443    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.628443    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:11.632103    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:11.659899    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.659899    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:11.663896    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:11.695830    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.695864    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:11.699333    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:11.728245    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.728314    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:11.731834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:11.762004    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.762038    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:11.765497    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:11.800437    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.800437    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:11.800437    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:11.800437    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.850659    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:11.850659    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:11.927328    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:11.927328    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:11.968115    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:11.968115    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:12.061366    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:12.049456    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.050395    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.051658    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.052989    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.055935    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:12.049456    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.050395    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.051658    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.052989    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.055935    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:12.061366    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:12.061366    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:09:07.775163    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:14.593463    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:14.619698    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:14.649625    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.649625    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:14.653809    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:14.682807    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.682865    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:14.686225    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:14.716867    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.716867    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:14.720947    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:14.748712    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.748712    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:14.753598    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:14.786467    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.786467    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:14.790745    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:14.820388    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.820388    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:14.824364    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:14.856683    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.856715    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:14.860387    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:14.907334    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.907388    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:14.907388    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:14.907388    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:14.970536    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:14.971543    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:15.009837    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:15.009837    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:15.100833    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:15.089537    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.090644    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.091541    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.092652    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.093429    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:15.089537    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.090644    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.091541    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.092652    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.093429    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:15.100833    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:15.100833    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:15.129774    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:15.129838    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:17.687506    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:17.711884    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:17.740676    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.740676    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:17.743807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:17.775526    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.775598    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:17.779196    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:17.810564    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.810564    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:17.815366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:17.847149    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.847149    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:17.850304    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:17.880825    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.880825    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:17.884416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:17.913663    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.913663    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:17.917519    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:17.949675    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.949736    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:17.953399    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:17.981777    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.981777    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:17.981853    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:17.981853    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:18.045143    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:18.045143    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:18.085682    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:18.085682    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:18.174824    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:18.164839    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.166260    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.167755    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.169313    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.170543    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:18.164839    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.166260    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.167755    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.169313    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.170543    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:18.174862    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:18.174890    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:18.201721    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:18.201721    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:20.754573    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:20.779418    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:20.815289    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.815336    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:20.821329    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:20.849494    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.849566    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:20.853416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:20.886139    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.886213    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:20.890864    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:20.921623    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.921691    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:20.925413    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:20.955001    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.955030    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:20.959115    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:20.986446    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.986446    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:20.990622    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:21.019381    6296 logs.go:282] 0 containers: []
	W1217 02:09:21.019903    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:21.023386    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:21.049708    6296 logs.go:282] 0 containers: []
	W1217 02:09:21.049708    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:21.049708    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:21.049708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:21.114512    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:21.114512    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:21.154312    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:21.154312    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:21.241835    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:21.232254    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.233191    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.235446    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.236247    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.238241    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:21.241835    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:21.241835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:21.269935    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:21.269935    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:09:17.811223    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:23.827385    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:23.851293    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:23.884017    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.884017    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:23.887852    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:23.920819    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.920819    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:23.925124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:23.953397    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.953468    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:23.957090    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:23.987965    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.987965    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:23.992238    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:24.021188    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.021188    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:24.027472    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:24.059066    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.059066    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:24.062927    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:24.092066    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.092066    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:24.096083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:24.130020    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.130083    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:24.130083    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:24.130083    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:24.193264    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:24.193264    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:24.233590    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:24.233590    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:24.334738    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:24.323376    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.324478    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.325163    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327407    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327995    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:24.334738    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:24.334738    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:24.361711    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:24.361711    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:25.361736    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:09:25.443830    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:25.443830    6296 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:09:26.915928    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:26.940552    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:26.972265    6296 logs.go:282] 0 containers: []
	W1217 02:09:26.972334    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:26.975468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:27.004131    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.004131    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:27.007688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:27.040755    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.040755    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:27.044298    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:27.075607    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.075607    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:27.079764    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:27.109726    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.109726    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:27.113807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:27.142060    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.142060    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:27.145049    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:27.179827    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.179898    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:27.183340    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:27.212340    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.212340    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:27.212340    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:27.212340    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:27.290453    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:27.280957    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.282008    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.283593    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.284873    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.286226    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:27.290453    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:27.290453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:27.317919    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:27.317919    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:27.372636    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:27.372636    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:27.434881    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:27.434881    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:29.980965    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:30.007081    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:30.038766    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.038766    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:30.042837    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:30.074216    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.074277    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:30.077495    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:30.109815    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.109815    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:30.113543    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:30.144692    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.144692    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:30.148595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:30.181530    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.181530    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:30.185056    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:30.230054    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.230054    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:30.233965    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:30.264421    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.264421    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:30.268191    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:30.302463    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.302463    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:30.302463    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:30.302463    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:30.369905    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:30.369905    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:30.407364    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:30.407364    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:30.501045    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:30.489137    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.491259    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.493208    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.494311    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.496063    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:30.501045    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:30.501045    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:30.529058    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:30.529119    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:30.973740    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:09:31.053832    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:31.053832    6296 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:09:31.057712    6296 out.go:179] * Enabled addons: 
	I1217 02:09:31.060716    6296 addons.go:530] duration metric: took 1m41.3245326s for enable addons: enabled=[]
	W1217 02:09:27.847902    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:33.093000    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:33.117479    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:33.148299    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.148299    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:33.152403    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:33.180747    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.180747    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:33.184258    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:33.214319    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.214389    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:33.217921    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:33.244463    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.244463    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:33.248882    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:33.280520    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.280573    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:33.284251    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:33.313836    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.313883    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:33.318949    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:33.351545    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.351545    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:33.355242    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:33.384638    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.384638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:33.384638    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:33.384638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:33.438624    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:33.438624    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:33.503148    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:33.504145    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:33.542770    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:33.542770    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:33.628872    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:33.616788    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.618355    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.619202    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.622311    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.623559    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:33.628872    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:33.628872    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:36.163766    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:36.190660    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:36.219485    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.219485    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:36.223169    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:36.253826    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.253826    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:36.257584    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:36.289684    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.289684    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:36.293455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:36.321228    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.321228    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:36.326076    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:36.355893    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.355893    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:36.360432    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:36.392307    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.392359    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:36.395377    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:36.427797    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.427797    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:36.431432    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:36.465462    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.465547    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:36.465590    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:36.465605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:36.515585    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:36.515688    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:36.577828    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:36.577828    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:36.617923    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:36.617923    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:36.706865    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:36.696037    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.697154    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.698217    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.699314    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.700190    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:36.706865    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:36.706865    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:39.240583    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:39.269426    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:39.300548    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.300548    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:39.304455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:39.337640    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.337640    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:39.341427    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:39.375280    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.375280    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:39.379328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:39.408206    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.408291    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:39.413138    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:39.439760    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.439760    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:39.443728    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:39.470865    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.471120    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:39.477630    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:39.510101    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.510101    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:39.515759    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:39.545423    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.545494    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:39.545494    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:39.545559    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:39.574474    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:39.574474    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:39.627410    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:39.627410    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:39.687852    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:39.687852    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:39.730823    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:39.730823    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:39.820771    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:39.809479    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.810890    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.811655    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.814487    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.816836    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:42.326489    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:42.349989    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:42.381673    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.381673    6296 logs.go:284] No container was found matching "kube-apiserver"
	W1217 02:09:37.889672    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:42.385392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:42.414575    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.414575    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:42.418510    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:42.452120    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.452120    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:42.456157    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:42.484625    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.484625    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:42.487782    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:42.520235    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.520235    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:42.525546    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:42.558589    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.558589    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:42.561770    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:42.592364    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.592364    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:42.596368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:42.625522    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.625522    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:42.625522    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:42.625522    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:42.661616    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:42.661616    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:42.748046    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:42.737433    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.739312    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.740542    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.743197    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.744170    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:42.748046    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:42.748046    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:42.778854    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:42.778854    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:42.827860    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:42.827860    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:45.394220    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:45.418501    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:45.453084    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.453132    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:45.457433    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:45.491679    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.491679    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:45.495517    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:45.524934    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.524934    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:45.528788    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:45.559787    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.559837    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:45.563714    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:45.608019    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.608104    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:45.612132    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:45.639869    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.639869    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:45.644002    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:45.671767    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.671767    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:45.675466    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:45.704056    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.704104    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:45.704104    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:45.704104    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:45.766557    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:45.766557    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:45.807449    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:45.807449    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:45.898686    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:45.887850    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.888794    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.889893    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.891161    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.894108    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:45.887850    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.888794    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.889893    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.891161    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.894108    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:45.898686    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:45.898686    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:45.924614    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:45.924614    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
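
	The cycle above probes each expected control-plane container by Docker name filter (k8s_kube-apiserver, k8s_etcd, k8s_coredns, and so on), and every probe returns zero containers. A minimal way to reproduce the same probes by hand, assuming shell access to the node (for example via "minikube ssh"; the profile name is not shown in these lines):

	    # Probe each control-plane container the same way the harness does.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      echo "== $c =="
	      docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
	    done

	An empty result for every name, as seen here, suggests the kubelet has not yet started any control-plane pod at all.
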
	I1217 02:09:48.482563    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:48.510137    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:48.546063    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.546063    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:48.551905    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:48.588536    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.588617    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:48.592628    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:48.621540    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.621540    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:48.625701    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:48.653505    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.653505    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:48.659485    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:48.688940    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.689008    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:48.692649    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:48.718858    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.718858    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:48.722907    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:48.752451    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.752451    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:48.755913    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:48.785865    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.785903    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:48.785903    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:48.785948    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:48.842730    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:48.843261    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:48.905352    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:48.905352    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:48.945271    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:48.945271    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:49.027913    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:49.016272    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.017718    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.022195    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.023419    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.024431    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:49.016272    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.017718    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.022195    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.023419    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.024431    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:49.027963    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:49.027963    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
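
	Each "failed describe nodes" block above shows kubectl dialing https://localhost:8443 and being refused. That address comes from the node-local kubeconfig passed on the command line. To confirm which endpoint it targets (the path is taken from the Run lines above; the expected value is an inference from the errors, not a captured output):

	    # Inside the node: check the apiserver endpoint recorded in the kubeconfig.
	    sudo grep -n 'server:' /var/lib/minikube/kubeconfig
	    # expected to show something like: server: https://localhost:8443
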
	I1217 02:09:51.563182    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:51.587223    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:51.619597    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.619621    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:51.623355    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:51.652069    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.652152    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:51.655716    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:51.684602    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.684653    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:51.687735    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:51.716327    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.716327    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:51.720054    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:51.750202    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.750266    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:51.753821    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:51.781863    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.781863    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:51.785648    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:51.814791    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.814841    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:51.818565    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:51.850654    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.850654    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:51.850654    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:51.850654    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:51.912429    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:51.912429    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:51.951795    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:51.951795    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:52.035486    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:52.024665    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.026342    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.028055    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.029764    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.030775    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:52.024665    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.026342    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.028055    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.029764    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.030775    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:52.035486    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:52.035486    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:52.063472    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:52.063472    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:09:47.930106    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
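
	The interleaved warning above comes from a parallel test process (pid 6768) polling the Ready condition of node no-preload-184000 through a forwarded port on 127.0.0.1:63565. A roughly equivalent manual check, sketched from that line (illustrative only; a real check would authenticate with that profile's kubeconfig rather than skipping TLS verification):

	    kubectl --server=https://127.0.0.1:63565 --insecure-skip-tls-verify \
	      get node no-preload-184000 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'

	An EOF, as logged, means the connection closed before an HTTP response arrived, which is typical while the apiserver behind the forwarded port is down or restarting.
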
	I1217 02:09:54.631678    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:54.657392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:54.689037    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.689037    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:54.692460    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:54.723231    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.723231    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:54.729158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:54.759168    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.759168    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:54.762883    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:54.792371    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.792371    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:54.796165    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:54.828375    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.828375    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:54.832201    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:54.862409    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.862476    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:54.866107    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:54.897161    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.897161    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:54.900834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:54.947452    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.947452    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:54.947452    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:54.947452    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:55.016411    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:55.016411    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:55.055628    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:55.055628    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:55.152557    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:55.141168    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.142077    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.145931    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.147597    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.148932    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:55.141168    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.142077    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.145931    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.147597    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.148932    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:55.152599    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:55.152599    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:55.180492    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:55.180492    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:57.741989    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:57.768328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:57.799200    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.799200    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:57.803065    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:57.832042    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.832042    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:57.835921    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:57.863829    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.863891    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:57.867347    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:57.896797    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.896822    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:57.900369    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:57.929832    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.929907    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:57.933326    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:57.960278    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.960278    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:57.964215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:57.992277    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.992324    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:57.995951    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:58.026155    6296 logs.go:282] 0 containers: []
	W1217 02:09:58.026254    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:58.026254    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:58.026303    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:58.091999    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:58.091999    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:58.131520    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:58.131520    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:58.226831    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:58.216784    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.218266    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.219997    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.221198    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.222992    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:58.216784    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.218266    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.219997    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.221198    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.222992    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:58.226831    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:58.226831    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:58.256592    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:58.256635    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
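
	Every cycle ends the same way: nothing matches the pgrep pattern, no k8s_kube-apiserver container exists, so nothing listens on localhost:8443 and kubectl gets "connection refused". A direct way to observe the same state from inside the node (assuming curl is present in the node image):

	    # With the apiserver down, both checks fail the same way kubectl does.
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'          # no output: no such process
	    curl -sk https://localhost:8443/healthz; echo " ($?)" # refused: curl exit code 7
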
	I1217 02:10:00.809919    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:00.842222    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:00.872955    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.872955    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:00.876666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:00.906031    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.906031    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:00.909593    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:00.939873    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.939946    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:00.943346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:00.972609    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.972643    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:00.975886    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:01.005269    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.005269    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:01.009766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:01.041677    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.041677    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:01.048361    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:01.081235    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.081312    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:01.084849    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:01.113437    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.113437    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:01.113437    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:01.113437    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:01.160067    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:01.160624    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:01.225071    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:01.225071    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:01.265307    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:01.265307    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:01.348506    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:01.336920    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.338210    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.339738    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.341232    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.342188    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:01.336920    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.338210    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.339738    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.341232    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.342188    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:01.348535    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:01.348571    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:09:57.967423    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:03.891628    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:03.925404    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:03.965688    6296 logs.go:282] 0 containers: []
	W1217 02:10:03.965688    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:03.968982    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:04.006348    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.006348    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:04.009769    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:04.039968    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.039968    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:04.044404    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:04.078472    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.078472    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:04.081894    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:04.113348    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.113348    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:04.117138    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:04.148885    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.148885    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:04.152756    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:04.181559    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.181616    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:04.185351    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:04.217017    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.217017    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:04.217017    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:04.217017    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:04.284540    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:04.284540    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:04.324402    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:04.324402    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:04.409943    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:04.395416    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.396326    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.402206    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.403321    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.404006    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:04.395416    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.396326    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.402206    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.403321    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.404006    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:04.409943    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:04.409943    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:04.438771    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:04.438771    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
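
	Each diagnostic cycle collects the same four log sources, only in rotating order. The commands below are taken verbatim from the Run lines in this section and can be issued manually inside the node; the last line falls back to "docker ps -a" when crictl is absent, with $(...) standing in for the log's backtick substitution:

	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a
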
	I1217 02:10:06.997897    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:07.024185    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:07.054915    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.055512    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:07.060167    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:07.089778    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.089778    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:07.093773    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:07.124641    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.124641    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:07.128016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:07.154834    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.154915    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:07.158505    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:07.188568    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.188568    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:07.192962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:07.225078    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.225078    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:07.228699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:07.258599    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.258659    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:07.262590    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:07.291623    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.291623    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:07.291623    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:07.291623    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:07.322611    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:07.322611    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:07.374970    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:07.374970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:07.438795    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:07.438795    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:07.479442    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:07.479442    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:07.566162    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:07.555486    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.557015    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.558199    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559195    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559622    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:07.555486    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.557015    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.558199    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559195    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559622    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:10.072312    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:10.096505    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:10.125617    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.125617    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:10.129377    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:10.157921    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.157921    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:10.161850    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:10.191705    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.191705    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:10.196003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:10.224412    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.224482    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:10.229368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:10.258140    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.258140    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:10.261205    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:10.292047    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.292047    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:10.296511    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:10.325818    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.325818    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:10.329752    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:10.359454    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.359530    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:10.359530    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:10.359530    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:10.413970    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:10.413970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:10.476665    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:10.476665    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:10.516335    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:10.516335    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:10.602353    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:10.592838    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.594139    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.595393    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.596552    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.597619    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:10.592838    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.594139    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.595393    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.596552    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.597619    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:10.602353    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:10.602353    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:10:08.007712    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
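
	The pgrep probe repeats roughly every three seconds throughout this section as the harness waits for an apiserver that never appears. A minimal sketch of an equivalent wait loop (the three-second interval matches the spacing of the log timestamps; the five-minute deadline is an assumption for illustration, not minikube's actual timeout):

	    # Poll for a running kube-apiserver, giving up after a deadline.
	    deadline=$(( $(date +%s) + 300 ))
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$(date +%s)" -ge "$deadline" ]; then
	        echo "timed out waiting for kube-apiserver" >&2
	        break
	      fi
	      sleep 3
	    done
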
	I1217 02:10:13.134148    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:13.159720    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:13.191534    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.191534    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:13.195626    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:13.230035    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.230035    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:13.233817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:13.266476    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.266476    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:13.270598    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:13.305852    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.305852    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:13.310349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:13.341805    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.341867    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:13.345346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:13.377945    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.377945    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:13.381659    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:13.411885    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.411957    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:13.416039    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:13.446642    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.446642    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:13.446642    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:13.446642    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:13.487083    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:13.487083    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:13.574632    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:13.564930    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.565686    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.568158    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.569159    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.570310    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
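All of the "failed describe nodes" entries in this section reduce to one symptom: kubectl, reading the node-local kubeconfig, dials localhost:8443 and is refused because no kube-apiserver container is running yet. A minimal Go sketch of that connectivity probe (illustrative only, not minikube source):

// probe_apiserver.go: reproduce the failing check. localhost:8443 is the
// apiserver address kubectl resolves from /var/lib/minikube/kubeconfig above.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		// With no apiserver listening, this prints "connect: connection refused",
		// matching the memcache.go errors in the log.
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}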
	I1217 02:10:13.574632    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:13.574632    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:13.604181    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:13.604702    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:13.660020    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:13.660020    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
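Each retry cycle in this log runs the same sequence: pgrep for a kube-apiserver process, one docker ps name filter per control-plane component, then log gathering (kubelet, dmesg, describe nodes, Docker, container status). A minimal sketch of the container-probe step, in Go and run locally for illustration (minikube executes these commands over SSH via ssh_runner):

// container_probe.go: list containers whose names match the k8s_<component>
// prefix, as in the "docker ps -a --filter=name=k8s_..." lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println("docker ps failed:", err)
			continue
		}
		// An empty result corresponds to the `No container was found
		// matching ...` warnings in the log.
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}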
	I1217 02:10:16.225038    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:16.248922    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:16.280247    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.280247    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:16.284285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:16.312596    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.312596    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:16.316952    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:16.345108    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.345108    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:16.348083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:16.377403    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.377403    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:16.380619    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:16.410555    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.410555    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:16.414048    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:16.446454    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.446454    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:16.449405    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:16.478967    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.478967    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:16.484108    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:16.516422    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.516422    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:16.516422    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:16.516422    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:16.580305    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:16.580305    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:16.618663    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:16.618663    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:16.705105    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:16.694074    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.695040    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.696842    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.698676    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.700646    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:16.705105    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:16.705105    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:16.732046    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:16.732046    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
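The timestamps show this whole gather cycle repeating roughly every three seconds, keyed off the pgrep check for a kube-apiserver process. A sketch of such a retry loop; the two-minute deadline here is hypothetical, since the real timeout is not visible in this excerpt:

// apiserver_wait.go: poll for a kube-apiserver process until a deadline.
// pgrep runs locally here; minikube runs the same command over SSH.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute) // hypothetical bound
	for time.Now().Before(deadline) {
		// -x exact match, -n newest, -f match the full command line,
		// as in the "sudo pgrep -xnf kube-apiserver.*minikube.*" lines.
		if exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			fmt.Println("kube-apiserver process found")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("timed out waiting for kube-apiserver")
}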
	I1217 02:10:19.284431    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:19.307909    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:19.340842    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.340842    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:19.344830    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:19.371150    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.371150    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:19.374863    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:19.403216    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.403216    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:19.406907    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:19.433979    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.433979    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:19.438046    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:19.469636    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.469636    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:19.473675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:19.504296    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.504296    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:19.508671    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:19.535932    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.535932    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:19.539707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:19.567355    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.567416    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:19.567416    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:19.567416    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:19.629876    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:19.629876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:19.678547    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:19.678547    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:19.785306    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:19.776195    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.777270    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.778111    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.779442    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.780820    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:19.785306    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:19.785371    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:19.813137    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:19.813137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:22.369643    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:10:18.049946    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:22.396731    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:22.431018    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.431018    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:22.434688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:22.463307    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.463307    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:22.467323    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:22.497065    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.497065    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:22.500574    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:22.531497    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.531564    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:22.535088    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:22.563706    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.563779    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:22.567344    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:22.602516    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.602597    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:22.606242    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:22.637637    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.637699    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:22.641314    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:22.668078    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.668078    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:22.668078    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:22.668078    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:22.754963    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:22.744973    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.745956    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.748143    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.749016    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.751155    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:22.754963    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:22.754963    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:22.783172    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:22.783222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:22.840048    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:22.840048    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:22.900137    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:22.900137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
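The "container status" gather relies on a shell fallback: use crictl if `which crictl` finds it, otherwise fall back to docker ps -a. The same preference order expressed in Go (illustrative only):

// container_status.go: prefer crictl when present, else use the docker CLI,
// mirroring `sudo \`which crictl || echo crictl\` ps -a || sudo docker ps -a`.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "ps", "-a")
	if _, err := exec.LookPath("crictl"); err == nil {
		cmd = exec.Command("crictl", "ps", "-a")
	}
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("container status failed:", err)
	}
	fmt.Print(string(out))
}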
	I1217 02:10:25.445900    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:25.472646    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:25.502929    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.502929    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:25.506274    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:25.537721    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.537721    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:25.543044    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:25.572924    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.572924    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:25.576391    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:25.607737    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.607798    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:25.611457    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:25.644967    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.645041    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:25.648690    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:25.677801    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.677801    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:25.681530    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:25.709148    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.709148    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:25.715667    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:25.746892    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.746892    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:25.746892    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:25.746892    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:25.796336    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:25.796336    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:25.862353    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:25.862353    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:25.902100    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:25.902100    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:25.988926    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:25.979946    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.980923    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.983755    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.985453    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.986609    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:25.988926    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:25.988926    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:28.523475    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:28.549366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:28.580055    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.580055    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:28.583822    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:28.615168    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.615168    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:28.618724    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:28.650344    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.650368    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:28.654014    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:28.704033    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.704033    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:28.707699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:28.738871    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.738938    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:28.743270    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:28.775432    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.775432    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:28.779176    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:28.810234    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.810351    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:28.814357    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:28.845783    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.845783    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:28.845783    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:28.845783    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:28.902626    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:28.902626    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:28.963758    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:28.963758    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:29.002141    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:29.002141    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:29.104674    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:29.094415    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.095636    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.096872    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.097927    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.099112    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:29.104674    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:29.104674    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
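The "Process exited with status 1" lines are how the command runner reports a non-zero kubectl exit. A minimal Go sketch of that wrapping, assuming a kubectl on the PATH (the log invokes the node-local binary under /var/lib/minikube/binaries/v1.35.0-beta.0):

// run_describe.go: run kubectl, capture combined output, and surface the
// exit status on failure, as the logs.go:130 entries above do.
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("kubectl", "describe", "nodes",
		"--kubeconfig", "/var/lib/minikube/kubeconfig") // path taken from the log
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		fmt.Printf("Process exited with status %d\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("could not start kubectl:", err)
	}
	fmt.Print(string(out))
}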
	I1217 02:10:31.640270    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:31.668862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:31.703099    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.703099    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:31.706355    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:31.737408    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.737408    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:31.741549    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:31.771462    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.771549    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:31.775645    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:31.803600    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.803600    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:31.807313    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:31.835884    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.835884    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:31.840000    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:31.870518    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.870518    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:31.877548    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:31.905387    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.905387    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:31.909722    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:31.938258    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.938284    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:31.938284    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:31.938284    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:32.000115    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:32.000115    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:32.039351    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:32.039351    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:32.128849    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:32.117556    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.118519    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.121192    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.122137    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.123350    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:32.128849    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:32.128849    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:32.155670    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:32.155670    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:10:28.083644    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:34.707099    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:34.732689    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:34.763625    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.763625    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:34.767349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:34.797435    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.797435    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:34.801415    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:34.828785    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.828785    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:34.832654    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:34.864748    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.864748    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:34.868392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:34.896365    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.896365    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:34.900474    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:34.932681    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.932681    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:34.936571    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:34.966056    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.966056    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:34.969208    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:34.998362    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.998362    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:34.998362    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:34.998362    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:35.036977    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:35.036977    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:35.134841    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:35.123096    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.125161    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.126319    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.127728    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.129900    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:35.134841    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:35.134841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:35.162429    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:35.162429    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:35.213960    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:35.214015    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:37.779857    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:37.806799    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:37.840730    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.840730    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:37.846443    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:37.875504    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.875504    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:37.879215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:37.910068    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.910068    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:37.913551    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:37.942897    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.942897    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:37.946741    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:37.978321    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.978321    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:37.982267    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:38.008421    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.008421    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:38.013043    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:38.043041    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.043041    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:38.049737    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:38.082117    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.082117    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:38.082117    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:38.082117    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:38.148970    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:38.148970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:38.189697    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:38.189697    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:38.276122    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:38.265842    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.267106    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.268317    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.270927    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.272044    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:38.276122    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:38.276122    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:38.304355    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:38.304355    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:40.862712    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:40.889041    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:40.921169    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.921169    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:40.924297    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:40.956313    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.956356    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:40.960294    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:40.990144    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.990144    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:40.993876    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:41.026732    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.026803    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:41.030745    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:41.073825    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.073825    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:41.078152    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:41.105859    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.105859    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:41.111714    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:41.143286    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.143324    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:41.146776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:41.176314    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.176345    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:41.176345    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:41.176345    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:41.213266    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:41.213266    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:41.300305    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:41.290426    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.291562    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.292511    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.293690    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.294979    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:41.300305    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:41.300305    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:41.328560    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:41.328621    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:41.375953    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:41.375953    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:10:38.119927    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
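The W-lines tagged with PID 6768 are interleaved from a parallel test: "no-preload-184000" is polling its node's Ready condition through 127.0.0.1:63565 and seeing EOF while that cluster's apiserver is unavailable. A client-go sketch of such a readiness check; the kubeconfig path is a placeholder and the error handling is simplified:

// node_ready.go: fetch a node and report its Ready condition.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "no-preload-184000", metav1.GetOptions{})
	if err != nil {
		// An apiserver dropping connections surfaces here as EOF, as in the
		// node_ready.go:55 warnings above; the test retries on this path.
		fmt.Println("error getting node (will retry):", err)
		return
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Println("Ready condition:", c.Status)
		}
	}
}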
	I1217 02:10:43.941613    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:43.967455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:44.000199    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.000199    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:44.003568    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:44.035058    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.035058    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:44.040590    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:44.083687    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.083687    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:44.087476    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:44.115776    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.115776    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:44.119318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:44.155471    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.155513    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:44.159433    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:44.191599    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.191636    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:44.195145    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:44.228181    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.228211    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:44.231971    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:44.259687    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.259763    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:44.259763    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:44.259763    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:44.323705    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:44.323705    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:44.365401    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:44.365401    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:44.453893    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:44.444848    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.446165    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.447569    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.449198    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.450326    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:44.444848    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.446165    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.447569    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.449198    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.450326    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:44.453893    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:44.453893    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:44.480694    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:44.480694    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
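Each retry pass in the trace above probes for the expected control-plane containers (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) with docker ps name filters, finds none, and then re-gathers kubelet, dmesg, Docker, and container-status logs before trying again. As a rough illustration only (this is not minikube's actual source; the component names and the exact docker ps invocation are taken from the log lines above), such a probe could be sketched in Go as:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// listContainerIDs mirrors the probes in the trace: it lists the IDs of
// all containers (running or exited) whose name matches the given k8s
// component, using the same filter and format flags seen in the log.
func listContainerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns",
		"kube-scheduler", "kube-proxy", "kube-controller-manager",
		"kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		ids, err := listContainerIDs(c)
		if err != nil {
			fmt.Println("probe failed:", err)
			continue
		}
		if len(ids) == 0 {
			// Matches the repeated "0 containers" / "No container was
			// found matching ..." warnings in the trace above.
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}

Note the container-status gathering step runs "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a", so the same collection works whether or not crictl is installed on the node: if crictl is missing or its ps fails, the shell falls back to docker ps -a.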
	I1217 02:10:47.042501    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:47.067663    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:47.108433    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.108433    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:47.112206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:47.144336    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.144336    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:47.148449    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:47.182968    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.183049    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:47.186614    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:47.215738    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.215738    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:47.219595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:47.248444    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.248511    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:47.252434    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:47.280975    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.280975    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:47.284966    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:47.317178    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.317178    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:47.321223    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:47.352638    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.352638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:47.352638    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:47.352638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:47.390049    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:47.390049    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:47.479425    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:47.469913    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.471092    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.472262    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.473545    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.474680    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:47.469913    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.471092    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.472262    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.473545    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.474680    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:47.479425    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:47.479425    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:47.505331    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:47.505331    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:47.556431    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:47.556431    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:50.124255    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:50.151100    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:50.184499    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.184565    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:50.187696    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:50.221764    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.221764    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:50.225471    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:50.253823    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.253823    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:50.260470    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:50.289768    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.289815    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:50.295283    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:50.321597    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.321597    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:50.325774    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:50.356707    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.356707    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:50.360685    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:50.390099    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.390099    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:50.393971    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:50.420950    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.420950    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:50.420950    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:50.420950    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:50.484730    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:50.484730    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:50.523997    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:50.523997    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:50.618256    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:50.607046    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.608047    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.610609    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.611743    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.612938    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:50.607046    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.608047    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.610609    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.611743    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.612938    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:50.618256    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:50.618256    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:50.645077    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:50.645077    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:10:48.158175    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:53.200622    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:53.223348    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:53.253589    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.253589    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:53.258688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:53.287647    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.287689    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:53.291555    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:53.324358    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.324403    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:53.327650    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:53.355417    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.355417    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:53.359780    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:53.390012    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.390012    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:53.393536    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:53.420636    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.420672    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:53.424429    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:53.453665    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.453744    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:53.456764    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:53.486769    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.486836    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:53.486875    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:53.486875    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:53.552513    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:53.552513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:53.593054    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:53.593054    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:53.683171    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:53.673168    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.674217    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.677093    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.678848    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.679784    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:53.673168    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.674217    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.677093    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.678848    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.679784    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:53.683207    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:53.683230    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:53.712513    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:53.712513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:56.288600    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:56.314380    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:56.347447    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.347447    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:56.351158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:56.381779    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.381779    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:56.385232    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:56.423000    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.423000    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:56.427083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:56.456635    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.456635    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:56.460509    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:56.490868    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.490868    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:56.496594    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:56.523671    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.523671    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:56.527847    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:56.559992    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.559992    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:56.565352    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:56.591708    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.591708    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:56.591708    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:56.591708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:56.656572    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:56.656572    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:56.696334    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:56.696334    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:56.788411    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:56.777962   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.779251   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.780163   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.782593   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.783670   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:56.777962   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.779251   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.780163   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.782593   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.783670   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:56.788411    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:56.788411    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:56.815762    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:56.815762    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:59.370676    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:59.404615    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:59.440735    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.440735    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:59.446758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:59.475209    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.475209    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:59.479521    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:59.509465    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.509465    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:59.513228    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:59.542409    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.542409    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:59.546008    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:59.575778    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.575778    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:59.579759    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:59.613465    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.613465    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:59.617266    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:59.645245    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.645245    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:59.649170    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:59.680413    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.680449    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:59.680449    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:59.680449    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:59.713987    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:59.713987    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:59.764930    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:59.764994    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:59.832077    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:59.832077    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:59.870681    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:59.870681    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:59.953336    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:59.942085   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.942906   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.945651   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.947051   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.948218   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:59.942085   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.942906   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.945651   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.947051   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.948218   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1217 02:10:58.200115    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:11:02.457745    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:02.492666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:02.526665    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.526665    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:02.530862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:02.560353    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.560413    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:02.564099    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:02.595430    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.595430    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:02.599884    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:02.629744    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.629744    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:02.633637    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:02.662623    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.662623    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:02.666817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:02.694696    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.694696    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:02.698194    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:02.727384    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.727442    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:02.731483    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:02.766114    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.766114    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:02.766114    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:02.766114    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:02.830755    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:02.830755    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:02.870216    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:02.870216    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:02.958327    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:02.947356   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.948306   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.949403   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.950298   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.952486   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:02.947356   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.948306   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.949403   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.950298   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.952486   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:02.958327    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:02.958380    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:02.984980    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:02.984980    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:05.540158    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:05.564812    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:05.595638    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.595638    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:05.599748    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:05.628748    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.628748    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:05.632878    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:05.666232    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.666257    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:05.670293    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:05.699654    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.699654    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:05.703004    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:05.733113    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.733113    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:05.737096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:05.765591    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.765639    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:05.770398    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:05.796360    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.796360    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:05.800240    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:05.829847    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.829914    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:05.829914    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:05.829945    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:05.880789    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:05.880789    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:05.943002    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:05.943002    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:05.983389    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:05.983389    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:06.076023    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:06.063780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.064562   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.067564   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.069726   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.070666   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:06.063780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.064562   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.067564   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.069726   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.070666   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:06.076023    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:06.076023    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:08.608606    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:08.632215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:08.665017    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.665017    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:08.669299    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:08.695355    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.695355    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:08.699306    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:08.729054    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.729054    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:08.732454    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:08.759881    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.759881    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:08.764328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:08.793695    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.793777    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:08.797908    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:08.826225    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.826225    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:08.829679    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:08.859645    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.859645    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:08.863083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:08.893657    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.893657    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:08.893657    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:08.893657    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:08.958163    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:08.958163    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:08.997418    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:08.997418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:09.087973    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:09.074815   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.076834   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.078823   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.080747   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.081590   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:09.074815   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.076834   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.078823   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.080747   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.081590   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:09.087973    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:09.087973    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:09.115687    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:09.115687    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:11.697770    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:11.725676    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:11.758809    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.758809    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:11.762929    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:11.794198    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.794198    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:11.798023    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:11.828890    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.828890    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:11.833358    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:11.865217    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.865217    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:11.868915    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:11.897672    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.897672    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:11.901235    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:11.931725    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.931808    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:11.935264    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:11.966263    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.966263    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:11.970422    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:11.999856    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.999856    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:11.999856    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:11.999856    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:12.064137    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:12.064137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:12.102491    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:12.102491    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:12.183568    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:12.174095   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.175081   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.176122   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.177427   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.178548   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:12.174095   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.175081   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.176122   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.177427   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.178548   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
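
Every `kubectl describe nodes` attempt above fails the same way: "connect: connection refused" on [::1]:8443, meaning nothing is listening on the apiserver port at all. A minimal standalone Go sketch of that reachability check (the address and timeout here are illustrative, not minikube's code):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Dial the same endpoint kubectl is trying; "connection refused"
	// here means no kube-apiserver process is bound to the port.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err) // the log's failure mode
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
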
	I1217 02:11:12.183568    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:12.183568    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:12.212178    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:12.212178    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
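
The probe sequence that repeats above is mechanical: for each control-plane component, list any container whose name matches k8s_<component> via `docker ps -a --filter=name=... --format={{.ID}}`, and warn when none exist. A standalone sketch of that pattern (illustrative only, not minikube's actual logs.go implementation):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or not)
// whose name matches k8s_<component>, mirroring the probe in the log.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("E probing %q: %v\n", c, err)
			continue
		}
		fmt.Printf("I %d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("W No container was found matching %q\n", c)
		}
	}
}

In the failing run above every probe prints "0 containers", which is why the subsequent describe-nodes step has no apiserver to talk to.
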
	W1217 02:11:08.241744    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:11:16.871278    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1217 02:11:16.871278    6768 node_ready.go:38] duration metric: took 6m0.0008728s for node "no-preload-184000" to be "Ready" ...
	I1217 02:11:16.874572    6768 out.go:203] 
	W1217 02:11:16.876457    6768 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 02:11:16.876457    6768 out.go:285] * 
	W1217 02:11:16.879042    6768 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:11:16.881673    6768 out.go:203] 
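
The interleaved lines from pid 6768 show the other start (node "no-preload-184000") giving up: the node never reported the "Ready" condition within the 6m0s budget, so the wait loop surfaces GUEST_START with "context deadline exceeded". A hedged client-go sketch of that poll-until-Ready-or-deadline pattern (the node name and kubeconfig path are taken from the log; the 2-second retry interval is an assumption, and this is not minikube's node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the Node object until its Ready condition is True
// or the context deadline expires (the log's 6m0.0008728s failure path).
func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		// On error (e.g. EOF from a half-up apiserver) or a not-Ready
		// node, wait and retry until the deadline cancels us.
		select {
		case <-ctx.Done():
			return fmt.Errorf("waiting for node %q to be Ready: %w", name, ctx.Err())
		case <-time.After(2 * time.Second):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "no-preload-184000"); err != nil {
		fmt.Println("X Exiting:", err)
	}
}
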
	I1217 02:11:14.772821    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:14.797656    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:14.826900    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.826900    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:14.829894    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:14.859202    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.859202    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:14.862783    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:14.891414    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.891414    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:14.895052    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:14.925404    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.925404    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:14.928966    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:14.959295    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.959330    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:14.962893    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:14.991696    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.991730    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:14.994776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:15.025468    6296 logs.go:282] 0 containers: []
	W1217 02:11:15.025468    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:15.031674    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:15.060661    6296 logs.go:282] 0 containers: []
	W1217 02:11:15.060661    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:15.060733    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:15.060733    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:15.120513    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:15.120513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:15.159608    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:15.159608    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:15.244418    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:15.235611   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.236439   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.238662   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.239643   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.240776   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:15.235611   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.236439   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.238662   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.239643   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.240776   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:15.244418    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:15.244418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:15.271288    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:15.271288    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:17.830556    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:17.850600    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:17.886696    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.886696    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:17.890674    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:17.921702    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.921702    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:17.924697    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:17.952692    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.952692    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:17.956701    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:17.984691    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.984691    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:17.988655    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:18.024626    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.024663    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:18.028558    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:18.060310    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.060310    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:18.064024    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:18.100124    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.100124    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:18.104105    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:18.141223    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.141223    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:18.141223    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:18.141223    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:18.179686    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:18.179686    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:18.311240    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:18.298507   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.299764   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.301130   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.305360   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.306018   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:18.298507   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.299764   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.301130   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.305360   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.306018   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:18.311240    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:18.311240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:18.342566    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:18.342615    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:18.393872    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:18.393872    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:20.977693    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:21.006733    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:21.035136    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.035201    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:21.039202    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:21.069636    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.069636    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:21.075448    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:21.105437    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.105437    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:21.108735    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:21.136602    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.136602    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:21.140124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:21.168674    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.168674    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:21.172368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:21.204723    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.204723    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:21.208123    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:21.237130    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.237130    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:21.240654    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:21.268170    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.268170    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:21.268170    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:21.268170    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:21.333642    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:21.333642    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:21.372230    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:21.372230    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:21.467012    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:21.456191   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457465   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457898   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.460543   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.461536   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:21.456191   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457465   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457898   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.460543   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.461536   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:21.467012    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:21.467012    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:21.495867    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:21.495867    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:24.053568    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:24.079587    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:24.110362    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.110399    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:24.113326    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:24.141818    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.141818    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:24.145313    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:24.172031    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.172031    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:24.176197    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:24.205114    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.205133    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:24.208437    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:24.238244    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.238244    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:24.242692    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:24.271687    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.271687    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:24.276384    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:24.307922    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.307922    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:24.311538    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:24.350108    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.350108    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:24.350108    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:24.350108    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:24.402159    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:24.402224    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:24.463824    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:24.463824    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:24.503645    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:24.503645    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:24.591969    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:24.584283   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.585294   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.586182   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.588436   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.589378   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:24.584283   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.585294   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.586182   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.588436   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.589378   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:24.591969    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:24.591969    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:27.123965    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:27.157839    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:27.199991    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.199991    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:27.204206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:27.231981    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.231981    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:27.235568    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:27.265668    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.265668    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:27.269162    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:27.299488    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.299488    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:27.303277    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:27.335769    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.335769    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:27.339516    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:27.369112    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.369112    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:27.372881    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:27.402031    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.402031    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:27.405780    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:27.436610    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.436610    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:27.436610    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:27.436610    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:27.523394    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:27.514396   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.515456   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.516979   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.518950   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.519928   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:27.514396   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.515456   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.516979   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.518950   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.519928   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:27.523917    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:27.523957    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:27.552476    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:27.552476    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:27.607026    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:27.607078    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:27.670834    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:27.670834    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:30.216027    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:30.241711    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:30.272275    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.272275    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:30.276071    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:30.304635    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.304635    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:30.307639    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:30.340374    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.340374    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:30.343758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:30.374162    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.374162    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:30.378010    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:30.407836    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.407836    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:30.411411    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:30.440002    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.440002    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:30.443429    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:30.472647    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.472647    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:30.476538    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:30.510744    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.510744    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:30.510744    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:30.510744    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:30.575069    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:30.575156    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:30.639732    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:30.640731    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:30.685195    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:30.685195    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:30.775246    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:30.762447   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.763441   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.764998   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.765913   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.768466   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:30.762447   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.763441   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.764998   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.765913   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.768466   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:30.775295    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:30.775295    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:33.308109    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:33.334329    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:33.365061    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.365061    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:33.370854    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:33.399488    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.399488    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:33.406335    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:33.436434    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.436434    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:33.439783    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:33.468947    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.468947    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:33.474014    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:33.502568    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.502568    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:33.506146    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:33.535706    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.535706    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:33.540016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:33.573811    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.573811    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:33.577712    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:33.606321    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.606321    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:33.606321    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:33.606321    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:33.671884    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:33.671884    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:33.712095    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:33.712095    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:33.800767    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:33.788569   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.789526   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.793280   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.794779   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.795796   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:33.788569   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.789526   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.793280   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.794779   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.795796   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:33.800848    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:33.800884    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:33.829402    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:33.829474    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:36.410236    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:36.438912    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:36.468229    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.468229    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:36.472231    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:36.501220    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.501220    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:36.506462    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:36.539556    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.539556    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:36.543603    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:36.584367    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.584367    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:36.588513    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:36.620670    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.620670    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:36.626030    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:36.654239    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.654239    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:36.658962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:36.689023    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.689023    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:36.693754    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:36.721351    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.721351    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:36.721351    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:36.721351    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:36.787832    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:36.787832    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:36.828019    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:36.828019    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:36.916923    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:36.906317   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.907259   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.909560   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.910589   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.911494   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:36.906317   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.907259   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.909560   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.910589   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.911494   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:36.916923    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:36.916923    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:36.946231    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:36.946265    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:39.498459    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:39.522909    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:39.553462    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.553462    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:39.557524    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:39.585462    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.585462    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:39.591342    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:39.619332    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.619399    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:39.623096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:39.651071    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.651071    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:39.654766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:39.683502    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.683502    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:39.687390    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:39.715332    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.715332    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:39.718932    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:39.749019    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.749019    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:39.752739    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:39.783378    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.783378    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:39.783378    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:39.783378    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:39.835019    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:39.835019    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:39.899542    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:39.899542    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:39.938717    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:39.938717    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:40.026359    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:40.016461   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.017619   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.018723   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.019917   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.021008   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:40.016461   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.017619   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.018723   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.019917   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.021008   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:40.026403    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:40.026446    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
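This first pass shows the fixed diagnostic sweep minikube falls back to while waiting for the apiserver: probe Docker for each control-plane container by its k8s_<component> name, then gather kubelet, dmesg, describe-nodes, and Docker/cri-docker logs. Below is a minimal sketch of the same container probe for running by hand inside the node (for example via `minikube ssh`); the component list and filters are copied from the `docker ps` invocations above, while the loop wrapper itself is illustrative rather than minikube's actual code:

	# Probe for each control-plane container the way the pass above does.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(docker ps -a --filter=name="k8s_${c}" --format='{{.ID}}')
	  # An empty result is what logs.go:284 reports as "No container was found"
	  [ -n "$ids" ] && echo "${c}: ${ids}" || echo "no container matching ${c}"
	done

Every probe in this run comes back empty, consistent with kubelet never having created the static control-plane pods.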
	[... this log-gathering pass repeats roughly every 3 seconds from 02:11:42 through 02:12:10 (kubectl PIDs 12505, 12666, 12807, 12965, 13158, 13320, 13465, 13631, 13789, 13955): each pass re-runs `sudo pgrep -xnf kube-apiserver.*minikube.*`, finds 0 containers for all eight components, and fails "describe nodes" with the same connection-refused errors against localhost:8443; the capture breaks off mid-pass at 02:12:10 ...]
	E1217 02:12:10.750149   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.751294   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:10.755613    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:10.755613    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:10.786516    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:10.787045    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:13.342631    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:13.368870    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:13.402304    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.402347    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:13.408012    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:13.436633    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.436710    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:13.439877    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:13.468754    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.469007    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:13.473752    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:13.505247    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.505324    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:13.509766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:13.538745    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.538745    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:13.542743    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:13.571986    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.571986    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:13.575522    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:13.604002    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.604002    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:13.608063    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:13.636028    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.636028    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:13.636028    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:13.636028    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:13.701418    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:13.701418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:13.740729    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:13.740729    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:13.830687    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:13.819650   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.820972   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.822197   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.823236   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.826085   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:13.830746    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:13.830768    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:13.856732    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:13.856732    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:16.415071    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:16.441827    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:16.474920    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.474920    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:16.478560    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:16.509149    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.509149    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:16.512927    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:16.544114    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.544114    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:16.547867    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:16.578111    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.578111    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:16.581776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:16.610586    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.610586    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:16.614807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:16.644103    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.644103    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:16.647954    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:16.692289    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.692289    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:16.696153    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:16.727229    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.727229    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:16.727229    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:16.727229    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:16.823236    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:16.813914   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.815339   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.816582   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.817632   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.818568   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:16.823236    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:16.823236    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:16.849827    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:16.849827    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:16.905388    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:16.905414    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:16.965153    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:16.965153    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
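	The sweep above repeats every few seconds: minikube polls for a live kube-apiserver process and, finding none, lists each expected k8s_* container by name filter before re-gathering logs. The shell below is an illustrative re-creation of that poll, not minikube's own code; the pgrep pattern and container names are copied verbatim from the log:

	# Wait until a kube-apiserver process matching minikube's pattern exists,
	# listing the expected control-plane containers on each pass.
	while ! sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	  for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	    docker ps -a --filter "name=k8s_${name}" --format '{{.ID}}'
	  done
	  sleep 3
	done

	Each pass here returns "0 containers" for all eight names, so the loop keeps cycling for the remainder of this stretch.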
	I1217 02:12:19.511192    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:19.537347    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:19.568920    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.568920    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:19.573318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:19.604587    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.604587    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:19.608244    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:19.637707    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.637732    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:19.641314    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:19.669047    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.669047    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:19.672932    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:19.703243    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.703243    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:19.706862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:19.738948    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.738948    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:19.742483    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:19.773620    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.773620    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:19.777766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:19.807218    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.807218    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:19.807218    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:19.807218    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:19.872750    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:19.872750    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:19.912835    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:19.912835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:19.997398    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:19.986540   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.987576   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.989197   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.992124   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.993453   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:19.997398    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:19.997398    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:20.025629    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:20.025629    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:22.593289    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:22.619754    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:22.652929    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.652929    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:22.657635    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:22.689768    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.689846    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:22.693504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:22.720087    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.720087    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:22.723840    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:22.752902    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.752959    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:22.757109    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:22.787369    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.787369    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:22.791584    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:22.822117    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.822117    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:22.825675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:22.856022    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.856022    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:22.859609    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:22.886982    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.886982    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:22.886982    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:22.886982    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:22.972988    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:22.964488   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.965494   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.966951   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.967984   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.968891   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:22.972988    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:22.972988    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:23.002037    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:23.002037    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:23.061548    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:23.061548    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:23.124352    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:23.124352    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:25.670974    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:25.706279    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:25.741150    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.741150    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:25.745079    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:25.773721    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.773782    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:25.779777    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:25.808516    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.808516    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:25.813011    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:25.844755    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.844755    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:25.848591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:25.877332    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.877332    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:25.881053    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:25.907973    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.907973    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:25.914424    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:25.941138    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.941138    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:25.945025    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:25.974760    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.974760    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:25.974760    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:25.974760    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:26.012354    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:26.012354    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:26.113177    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:26.103007   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.104679   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.105508   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.108836   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.110003   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:26.113177    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:26.113177    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:26.144162    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:26.144245    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:26.194605    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:26.195138    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:28.763811    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:28.789762    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:28.820544    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.820544    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:28.824807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:28.855728    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.855728    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:28.860354    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:28.894655    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.894655    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:28.898069    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:28.928310    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.928394    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:28.932124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:28.967209    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.967209    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:28.973126    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:29.002975    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.003024    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:29.006839    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:29.044805    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.044881    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:29.049158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:29.078108    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.078142    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:29.078174    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:29.078202    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:29.142751    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:29.142751    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:29.182082    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:29.182082    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:29.271566    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:29.260263   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.261578   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.262370   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.263821   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.265155   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:29.271596    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:29.271643    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:29.299332    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:29.299332    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:31.856743    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:31.882741    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:31.912323    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.912372    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:31.917046    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:31.948587    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.948631    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:31.952095    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:31.981682    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.981682    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:31.985888    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:32.022173    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.022173    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:32.026061    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:32.070026    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.070026    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:32.074016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:32.105255    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.105255    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:32.109062    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:32.140873    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.140947    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:32.143941    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:32.172848    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.172876    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:32.172876    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:32.172876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:32.237207    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:32.237207    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:32.275838    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:32.275838    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:32.360656    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:32.349190   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.350542   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.352960   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.354559   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.355745   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:32.360656    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:32.360656    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:32.391099    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:32.391099    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:34.970955    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:35.002200    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:35.036658    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.036658    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:35.041208    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:35.068998    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.068998    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:35.075758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:35.105253    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.105253    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:35.109356    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:35.137411    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.137411    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:35.141289    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:35.168542    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.168542    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:35.174717    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:35.204677    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.204677    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:35.209675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:35.240901    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.240901    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:35.244034    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:35.276453    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.276453    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:35.276453    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:35.276453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:35.341158    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:35.341158    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:35.381822    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:35.381822    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:35.472890    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:35.461861   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.463097   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.464080   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.465245   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.466603   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:35.472890    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:35.472890    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:35.501374    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:35.501374    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:38.054644    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:38.080787    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:38.112397    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.112420    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:38.116070    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:38.144341    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.144396    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:38.148080    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:38.177159    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.177159    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:38.181253    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:38.210000    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.210000    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:38.215709    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:38.243526    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.243526    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:38.247620    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:38.278443    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.278443    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:38.282504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:38.314414    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.314414    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:38.317968    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:38.345306    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.345306    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:38.345306    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:38.345412    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:38.425240    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:38.414795   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.415865   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.416969   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.418280   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.420090   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:38.425240    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:38.425240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:38.455129    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:38.455129    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:38.514775    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:38.514775    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:38.574255    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:38.574255    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:41.116537    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:41.139650    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:41.169726    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.169814    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:41.173285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:41.204812    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.204812    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:41.208892    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:41.235980    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.235980    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:41.240200    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:41.271415    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.271415    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:41.275005    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:41.303967    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.303967    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:41.309707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:41.340401    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.340401    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:41.343688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:41.374008    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.374008    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:41.377325    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:41.409502    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.409563    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:41.409563    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:41.409610    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:41.472168    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:41.472168    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:41.513098    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:41.513098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:41.601716    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:41.590607   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.591236   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.594281   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.595448   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.596679   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:41.601716    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:41.601716    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:41.629092    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:41.629148    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:44.185012    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:44.210566    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:44.242274    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.242274    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:44.248762    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:44.280241    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.280307    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:44.283818    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:44.312929    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.312997    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:44.316643    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:44.343840    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.343840    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:44.347619    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:44.378547    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.378547    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:44.382595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:44.410908    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.410908    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:44.414686    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:44.448329    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.448329    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:44.453888    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:44.484842    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.484842    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:44.484842    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:44.484842    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:44.550740    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:44.550740    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:44.589666    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:44.589666    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:44.677625    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:44.666291   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.667584   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.668804   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.671406   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.673722   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:44.677625    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:44.677625    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:44.706051    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:44.706051    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:47.257477    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:47.286845    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:47.315563    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.315563    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:47.319220    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:47.351319    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.351319    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:47.354946    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:47.382237    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.382237    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:47.386106    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:47.415608    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.415608    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:47.419575    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:47.449212    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.449241    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:47.452978    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:47.482356    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.482356    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:47.486511    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:47.518156    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.518205    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:47.522254    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:47.550631    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.550631    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:47.550631    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:47.550727    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:47.615950    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:47.615950    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:47.655928    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:47.655928    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:47.744126    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:47.732398   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.733599   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.736473   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.737237   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.739895   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:47.744164    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:47.744210    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:47.773502    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:47.773502    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:50.331328    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:50.368555    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:50.407443    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.407443    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:50.411798    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:50.440520    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.440544    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:50.444430    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:50.478050    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.478050    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:50.481848    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:50.513603    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.513658    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:50.517565    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:50.551935    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.552946    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:50.556641    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:50.591171    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.591171    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:50.594981    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:50.624821    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.624821    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:50.628756    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:50.661209    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.661209    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:50.661209    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:50.661209    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:50.693141    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:50.693141    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:50.746322    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:50.746322    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:50.805974    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:50.805974    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:50.844572    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:50.844572    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:50.935133    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:50.925528   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.926281   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.929008   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.930044   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.931058   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
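Every describe-nodes attempt in this stretch fails identically: kubectl on the node dials localhost:8443 and the connection is refused, which is consistent with the empty kube-apiserver probes above, nothing is listening because the apiserver container never started. A quick manual spot check, assuming a shell on the node (for example via minikube ssh); the /livez path is the standard apiserver health endpoint and is an assumption here, the report itself never calls it:

    # Hypothetical spot check: is anything serving on the port kubectl dials?
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo "no kube-apiserver process"
    curl -ksS --max-time 5 https://localhost:8443/livez || echo "nothing answering on localhost:8443"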
	I1217 02:12:53.441690    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:53.466017    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:53.494846    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.494846    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:53.499634    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:53.530839    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.530839    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:53.534661    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:53.567189    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.567189    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:53.571412    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:53.598763    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.598763    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:53.602673    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:53.629791    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.629791    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:53.632953    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:53.662323    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.662323    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:53.665394    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:53.695745    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.695745    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:53.701403    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:53.735348    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.735348    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:53.735348    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:53.735348    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:53.816532    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:53.807828   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.809036   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.810223   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.811373   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.812449   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:53.816532    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:53.816532    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:53.843453    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:53.843453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:53.893853    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:53.893853    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:53.954759    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:53.954759    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:56.499506    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:56.525316    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:56.561689    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.561738    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:56.565616    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:56.594009    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.594009    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:56.599822    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:56.624101    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.624101    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:56.628604    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:56.657977    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.658063    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:56.663240    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:56.694316    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.694316    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:56.698763    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:56.728527    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.728527    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:56.734446    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:56.765315    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.765315    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:56.769182    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:56.796198    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.796198    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:56.796198    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:56.796198    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:56.864777    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:56.864777    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:56.904264    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:56.904264    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:57.000434    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:56.990265   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.991556   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.992920   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.993844   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.996033   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:57.000434    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:57.000434    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:57.034757    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:57.034842    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:59.601768    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:59.627731    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:59.657009    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.657009    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:59.660962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:59.690428    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.690428    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:59.694181    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:59.723517    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.723592    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:59.727191    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:59.756251    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.756251    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:59.759627    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:59.791516    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.791516    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:59.795602    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:59.828192    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.828192    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:59.832003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:59.860258    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.860258    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:59.863635    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:59.893207    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.893207    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:59.893207    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:59.893207    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:59.958927    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:59.958927    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:00.004703    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:00.004703    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:00.096612    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:00.084050   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.085145   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.086221   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.088049   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.090502   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:00.096612    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:00.096612    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:00.124914    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:00.124975    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:02.682962    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:02.708543    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:02.737663    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.737663    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:02.741817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:02.772482    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.772482    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:02.778562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:02.806978    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.806978    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:02.813021    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:02.845688    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.845688    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:02.851578    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:02.880144    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.880200    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:02.883811    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:02.918466    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.918544    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:02.922186    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:02.951702    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.951702    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:02.955491    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:02.984638    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.984638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:02.984638    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:02.984638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:03.047941    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:03.047941    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:03.086964    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:03.086964    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:03.173007    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:03.161327   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.162497   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.163381   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.165030   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.166441   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:03.173086    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:03.173086    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:03.202017    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:03.202544    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:05.761010    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:05.786319    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:05.819785    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.819785    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:05.825532    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:05.853318    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.853318    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:05.858274    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:05.887613    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.887613    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:05.891162    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:05.919471    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.919471    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:05.922933    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:05.955441    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.955441    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:05.959241    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:05.984925    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.984925    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:05.989009    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:06.021101    6296 logs.go:282] 0 containers: []
	W1217 02:13:06.021101    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:06.024383    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:06.055098    6296 logs.go:282] 0 containers: []
	W1217 02:13:06.055098    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:06.055098    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:06.055098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:06.107743    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:06.107743    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:06.170319    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:06.170319    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:06.210360    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:06.210360    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:06.299194    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:06.288404   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.289415   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.292346   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.293307   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.294574   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:06.299194    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:06.299194    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
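Every retry in this stretch has the same shape: each `docker ps -a --filter=name=k8s_<component>` probe returns zero containers, so the control plane never came up, and the describe-nodes call can only fail with "connection refused" on localhost:8443 because nothing is listening there. A minimal way to confirm this by hand while the node is still up, assuming the test's profile still exists (the `<profile>` placeholder is not taken from this log):

    # If this prints no IDs, no apiserver container exists, and every
    # kubectl call against localhost:8443 will be refused, as seen above.
    minikube -p <profile> ssh -- "docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}"
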
	I1217 02:13:08.832901    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:08.860263    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:08.890111    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.890111    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:08.893617    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:08.921989    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.921989    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:08.925561    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:08.952883    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.952883    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:08.959516    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:08.991347    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.991347    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:08.995066    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:09.028011    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.028011    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:09.032096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:09.060803    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.060803    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:09.064596    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:09.093542    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.093572    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:09.096987    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:09.123594    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.123615    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:09.123615    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:09.123615    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:09.176222    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:09.176222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:09.238935    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:09.238935    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:09.278804    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:09.278804    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:09.367283    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:09.355984   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.356989   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.358233   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.359697   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.360921   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:09.355984   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.356989   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.358233   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.359697   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.360921   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:09.367283    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:09.367283    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:11.901781    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:11.930493    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:11.963534    6296 logs.go:282] 0 containers: []
	W1217 02:13:11.963534    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:11.967747    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:11.997700    6296 logs.go:282] 0 containers: []
	W1217 02:13:11.997700    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:12.001601    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:12.031862    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.031862    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:12.035544    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:12.066506    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.066506    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:12.071472    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:12.103184    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.103184    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:12.107033    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:12.135713    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.135713    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:12.139268    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:12.170350    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.170350    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:12.174053    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:12.202964    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.202964    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:12.202964    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:12.202964    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:12.252669    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:12.253197    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:12.318088    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:12.318088    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:12.356768    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:12.356768    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:12.443857    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:12.431867   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.432694   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.435515   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.436810   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.439065   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:12.431867   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.432694   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.435515   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.436810   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.439065   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:12.443857    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:12.443857    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:14.980350    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:15.007303    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:15.040020    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.040100    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:15.043303    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:15.073502    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.073502    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:15.077944    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:15.106871    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.106871    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:15.110453    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:15.138071    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.138095    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:15.141547    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:15.171602    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.171659    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:15.175341    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:15.207140    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.207181    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:15.210547    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:15.243222    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.243222    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:15.247103    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:15.280156    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.280232    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:15.280232    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:15.280232    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:15.342862    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:15.342862    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:15.384022    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:15.384022    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:15.469724    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:15.457538   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.458755   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.461376   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.463262   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.464126   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:15.457538   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.458755   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.461376   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.463262   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.464126   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:15.469766    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:15.469807    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:15.497606    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:15.497667    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
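The "container status" step above is a single shell fallback worth unpacking: `which crictl || echo crictl` expands to crictl's path when the binary is installed (and to the bare name otherwise, which then fails to run), so the trailing `|| sudo docker ps -a` lists containers via plain Docker instead. The same command as it appears in the log, annotated:

    # Prefer crictl when it is on PATH; if the first command fails
    # (crictl missing or erroring), fall back to listing via Docker.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
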
	I1217 02:13:18.064895    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:18.090410    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:18.123378    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.123429    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:18.127331    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:18.157210    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.157210    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:18.160924    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:18.191242    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.191242    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:18.195064    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:18.222561    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.222561    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:18.226125    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:18.255891    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.255891    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:18.259860    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:18.288868    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.288868    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:18.292834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:18.322668    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.322668    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:18.325666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:18.353052    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.353052    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:18.353052    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:18.353052    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:18.418504    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:18.418504    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:18.457348    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:18.457348    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:18.568946    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:18.539845   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.540709   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.559501   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.563750   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.565031   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:18.539845   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.540709   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.559501   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.563750   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.565031   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:18.569003    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:18.569003    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:18.602236    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:18.602236    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:21.158752    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:21.184475    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:21.214582    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.214582    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:21.218375    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:21.245604    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.245604    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:21.249850    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:21.281360    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.281360    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:21.286501    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:21.318549    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.318601    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:21.322609    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:21.353429    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.353460    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:21.357031    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:21.391028    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.391028    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:21.394206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:21.423584    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.423584    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:21.427599    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:21.458683    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.458683    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:21.458683    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:21.458683    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:21.526430    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:21.526430    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:21.565490    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:21.565490    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:21.656323    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:21.643307   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.644610   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.648760   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.649980   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.650911   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:21.643307   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.644610   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.648760   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.649980   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.650911   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:21.656323    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:21.656323    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:21.689700    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:21.689700    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:24.246630    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:24.280925    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:24.322972    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.322972    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:24.326768    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:24.355732    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.355732    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:24.359957    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:24.391937    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.392009    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:24.395559    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:24.427388    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.427388    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:24.431126    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:24.459891    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.459966    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:24.463468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:24.491009    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.491009    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:24.494465    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:24.524468    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.524468    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:24.528017    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:24.568815    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.568815    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:24.568815    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:24.568815    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:24.632772    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:24.632772    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:24.671731    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:24.671731    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:24.755604    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:24.747209   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.748169   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.750016   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.751205   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.752643   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:24.747209   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.748169   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.750016   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.751205   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.752643   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:24.755604    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:24.755604    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:24.784599    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:24.784660    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:27.338272    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:27.366367    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:27.395715    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.395715    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:27.399158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:27.427362    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.427362    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:27.430752    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:27.461990    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.461990    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:27.465748    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:27.492985    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.492985    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:27.497176    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:27.528724    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.528724    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:27.532970    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:27.571655    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.571655    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:27.575466    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:27.604007    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.604068    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:27.608062    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:27.639624    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.639689    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:27.639735    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:27.639735    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:27.705896    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:27.705896    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:27.745294    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:27.745294    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:27.827462    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:27.817987   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.819077   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.820142   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.821119   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.823572   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:27.817987   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.819077   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.820142   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.821119   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.823572   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:27.827462    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:27.827462    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:27.854463    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:27.854559    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:30.412283    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:30.438474    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:30.469848    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.469848    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:30.473330    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:30.501713    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.501713    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:30.505748    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:30.535870    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.535870    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:30.540177    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:30.572310    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.572310    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:30.576644    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:30.607087    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.607087    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:30.610334    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:30.640168    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.640168    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:30.643628    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:30.671132    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.671132    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:30.677927    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:30.708536    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.708536    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:30.708536    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:30.708536    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:30.773222    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:30.773222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:30.812763    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:30.812763    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:30.932347    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:30.917907   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.918960   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.921632   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.923322   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.925337   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:30.917907   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.918960   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.921632   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.923322   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.925337   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:30.932397    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:30.932444    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:30.961663    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:30.961663    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:33.524404    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:33.548624    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:33.580753    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.580845    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:33.583912    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:33.613001    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.613048    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:33.616808    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:33.645262    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.645262    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:33.649044    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:33.677477    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.677562    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:33.681205    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:33.710607    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.710669    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:33.714600    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:33.742889    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.742889    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:33.746623    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:33.777022    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.777022    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:33.780455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:33.809525    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.809525    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:33.809525    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:33.809525    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:33.860852    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:33.860936    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:33.924768    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:33.924768    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:33.962632    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:33.962632    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:34.054124    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:34.042221   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.043292   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.044548   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.046184   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.047237   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:34.042221   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.043292   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.044548   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.046184   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.047237   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:34.054124    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:34.054124    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
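The timestamps show the enclosing wait loop: roughly every three seconds minikube probes for an apiserver process with `pgrep -xnf kube-apiserver.*minikube.*`, and after each failed probe re-gathers the kubelet, dmesg, describe-nodes, Docker, and container-status logs. A sketch of an equivalent loop, with the ~3s interval read off the log and the overall deadline an assumption, not minikube's actual timeout:

    # Poll for a running apiserver the way the log does; the deadline
    # value here is illustrative only.
    deadline=$((SECONDS + 480))
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
      [ "$SECONDS" -ge "$deadline" ] && { echo "apiserver never started" >&2; exit 1; }
      sleep 3
    done
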
	I1217 02:13:36.589465    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:36.617658    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:36.652432    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.652432    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:36.656189    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:36.694709    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.694709    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:36.700040    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:36.729913    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.729913    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:36.733870    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:36.762591    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.762591    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:36.766493    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:36.796414    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.796414    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:36.800540    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:36.828148    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.828148    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:36.833323    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:36.862390    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.862390    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:36.866173    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:36.895727    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.895814    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:36.895814    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:36.895814    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:36.926240    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:36.926240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:36.975760    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:36.975760    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:37.036350    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:37.036350    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:37.072745    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:37.072745    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:37.161612    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:37.149826   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.150994   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.152971   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.154071   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.155248   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:37.149826   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.150994   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.152971   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.154071   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.155248   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:39.667288    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:39.691212    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:39.724148    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.724148    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:39.727935    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:39.761821    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.761821    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:39.765852    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:39.793659    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.793696    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:39.797422    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:39.825439    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.825473    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:39.828751    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:39.859011    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.859011    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:39.862518    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:39.891552    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.891613    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:39.894978    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:39.926857    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.926857    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:39.930363    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:39.975835    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.975835    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:39.975835    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:39.975835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:40.070107    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:40.058472   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.059584   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.060546   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.062682   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.064347   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:40.058472   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.059584   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.060546   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.062682   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.064347   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:40.070107    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:40.070107    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:40.098563    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:40.098605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:40.147476    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:40.147476    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:40.212702    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:40.212702    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:42.757339    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:42.786178    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:42.817429    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.817429    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:42.821164    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:42.850363    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.850415    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:42.854031    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:42.881774    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.881774    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:42.885802    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:42.915556    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.915556    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:42.919184    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:42.948329    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.948329    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:42.952430    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:42.982355    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.982355    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:42.986768    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:43.017700    6296 logs.go:282] 0 containers: []
	W1217 02:13:43.017700    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:43.021284    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:43.052749    6296 logs.go:282] 0 containers: []
	W1217 02:13:43.052779    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:43.052779    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:43.052813    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:43.091605    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:43.091605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:43.175861    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:43.162839   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.163916   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.164763   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.167177   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.170134   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:43.162839   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.163916   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.164763   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.167177   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.170134   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:43.175861    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:43.175861    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:43.204569    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:43.204569    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:43.257132    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:43.257132    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:45.825092    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:45.853653    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:45.886780    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.886780    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:45.890416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:45.921840    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.923184    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:45.928382    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:45.960187    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.960252    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:45.963959    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:45.993658    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.993712    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:45.997113    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:46.024308    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.024308    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:46.027994    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:46.060725    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.060725    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:46.064446    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:46.092825    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.092825    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:46.098256    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:46.129614    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.129688    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:46.129688    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:46.129688    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:46.216242    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:46.204904   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.206123   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.207788   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.210288   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.211623   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:46.204904   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.206123   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.207788   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.210288   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.211623   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:46.216263    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:46.216263    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:46.248767    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:46.248767    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:46.298044    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:46.298044    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:46.363186    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:46.363186    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:48.911992    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:48.946588    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:48.983880    6296 logs.go:282] 0 containers: []
	W1217 02:13:48.983880    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:48.987999    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:49.017254    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.017254    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:49.021239    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:49.053619    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.053619    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:49.057711    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:49.086289    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.086289    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:49.090230    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:49.123069    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.123069    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:49.130107    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:49.158724    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.158724    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:49.162733    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:49.193515    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.193573    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:49.197116    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:49.230153    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.230201    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:49.230245    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:49.230245    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:49.259747    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:49.259747    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:49.312360    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:49.312456    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:49.375035    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:49.375035    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:49.413908    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:49.413908    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:49.508187    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:49.496893   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.499745   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.502343   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.503338   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.504593   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:49.496893   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.499745   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.502343   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.503338   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.504593   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:52.012834    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:52.037104    6296 out.go:203] 
	W1217 02:13:52.039462    6296 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 02:13:52.039520    6296 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 02:13:52.039588    6296 out.go:285] * Related issues:
	W1217 02:13:52.039588    6296 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1217 02:13:52.039635    6296 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1217 02:13:52.041923    6296 out.go:203] 
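K8S_APISERVER_MISSING above means the 6m0s wait loop never saw an apiserver process or container. The probes it loops on are the two commands repeated throughout this trace; to rerun them by hand inside the node (same commands as above, quoted here for the shell):

    # empty output from both = no apiserver process and no k8s_kube-apiserver container
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'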
	
	
	==> Docker <==
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325544488Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325628897Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325641498Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325647799Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325653800Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325676802Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325716506Z" level=info msg="Initializing buildkit"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.423454913Z" level=info msg="Completed buildkit initialization"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.434194190Z" level=info msg="Daemon has completed initialization"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.434389711Z" level=info msg="API listen on [::]:2376"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.434491222Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 02:05:13 no-preload-184000 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.434476421Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 02:05:14 no-preload-184000 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Loaded network plugin cni"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 02:05:14 no-preload-184000 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
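dockerd and cri-dockerd both come up cleanly here, so the empty container table below points at the kubelet (which drives pod creation through the CRI) rather than at the runtime. The same fallback check minikube runs in the trace above confirms this by hand:

    # list CRI containers, falling back to plain docker if crictl is missing (same command as in the trace)
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a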
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:20:23.633776   17190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:20:23.634960   17190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:20:23.636134   17190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:20:23.637458   17190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:20:23.638558   17190 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +5.752411] CPU: 12 PID: 469779 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f8b9b910b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f8b9b910af6.
	[  +0.000001] RSP: 002b:00007fffc85e9670 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.875329] CPU: 10 PID: 469916 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7fdfac8dab20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fdfac8daaf6.
	[  +0.000001] RSP: 002b:00007ffd587a0060 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 02:20:23 up  2:39,  0 user,  load average: 0.22, 0.47, 1.46
	Linux no-preload-184000 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
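The kernel line shows a WSL2 host. Which cgroup hierarchy the node sees can be read from the filesystem type mounted at /sys/fs/cgroup, which is what the kubelet failure below hinges on:

    # cgroup2fs = unified v2 hierarchy; tmpfs = legacy v1, which kubelet v1.35.0-beta.0 rejects below
    stat -fc %T /sys/fs/cgroup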
	
	
	==> kubelet <==
	Dec 17 02:20:20 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:20:20 no-preload-184000 kubelet[16997]: E1217 02:20:20.788113   16997 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:20:20 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:20:20 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:20:21 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1208.
	Dec 17 02:20:21 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:20:21 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:20:21 no-preload-184000 kubelet[17025]: E1217 02:20:21.532418   17025 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:20:21 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:20:21 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:20:22 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1209.
	Dec 17 02:20:22 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:20:22 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:20:22 no-preload-184000 kubelet[17054]: E1217 02:20:22.261158   17054 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:20:22 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:20:22 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:20:22 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1210.
	Dec 17 02:20:22 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:20:22 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:20:23 no-preload-184000 kubelet[17067]: E1217 02:20:23.057011   17067 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:20:23 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:20:23 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:20:23 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1211.
	Dec 17 02:20:23 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:20:23 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
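This restart loop is the root cause behind the apiserver never appearing: kubelet v1.35.0-beta.0 refuses to start on a cgroup v1 host, systemd restarts it (counter 1208-1211 here), and no control-plane containers are ever created. On a WSL2-backed host, one commonly suggested workaround (an assumption, not taken from this report) is forcing the v2 hierarchy via .wslconfig on the Windows side and restarting WSL:

    # %UserProfile%\.wslconfig (suggested workaround, not from this run; apply with: wsl --shutdown)
    [wsl2]
    kernelCommandLine = cgroup_no_v1=all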
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000: exit status 2 (575.1266ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-184000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (545.37s)

x
+
TestStartStop/group/newest-cni/serial/Pause (13.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p newest-cni-383500 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-383500 -n newest-cni-383500
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-383500 -n newest-cni-383500: exit status 2 (611.1592ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-pause apiserver status = "Stopped"; want = "Paused"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-383500 -n newest-cni-383500
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-383500 -n newest-cni-383500: exit status 2 (571.8244ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p newest-cni-383500 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-383500 -n newest-cni-383500
E1217 02:14:01.851971    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-383500 -n newest-cni-383500: exit status 2 (609.2662ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause apiserver status = "Stopped"; want = "Running"
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-383500 -n newest-cni-383500
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p newest-cni-383500 -n newest-cni-383500: exit status 2 (563.8257ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: post-unpause kubelet status = "Stopped"; want = "Running"
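The sequence this test asserts, reconstructed from the Run lines above (profile name taken from this run): after pause the apiserver status must read "Paused", and after unpause both apiserver and kubelet must read "Running"; here all four probes returned "Stopped".

    out/minikube-windows-amd64.exe pause -p newest-cni-383500 --alsologtostderr -v=1
    out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-383500   # want: Paused
    out/minikube-windows-amd64.exe unpause -p newest-cni-383500 --alsologtostderr -v=1
    out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-383500   # want: Running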
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-383500
helpers_test.go:244: (dbg) docker inspect newest-cni-383500:

-- stdout --
	[
	    {
	        "Id": "58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638",
	        "Created": "2025-12-17T01:57:11.100405677Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 462672,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T02:07:38.479713902Z",
	            "FinishedAt": "2025-12-17T02:07:35.952064424Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/hostname",
	        "HostsPath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/hosts",
	        "LogPath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638-json.log",
	        "Name": "/newest-cni-383500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-383500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-383500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-383500",
	                "Source": "/var/lib/docker/volumes/newest-cni-383500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-383500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-383500",
	                "name.minikube.sigs.k8s.io": "newest-cni-383500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1db633168a5c321973d71a3d7a937d0960662192a945d2448f4398b25b744030",
	            "SandboxKey": "/var/run/docker/netns/1db633168a5c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63782"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63783"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63784"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-383500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a0a3f566cb0e1e68eaf85fc99a3ee131940651a4c9a15e291bc077be33f07b4e",
	                    "EndpointID": "d5e1ca0ef443df8c9e41596f8db19fb0cd842fc42e6efd30a71aaa1d3fefb2d9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-383500",
	                        "58edac260513"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
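The inspect output above shows the Docker container itself is fine (State.Status "running", Paused false); only the processes inside it are down, which is why {{.Host}} below reports Running while apiserver and kubelet report Stopped. To pull just those fields instead of the full JSON:

    # container-level state only; here: running paused=false despite the cluster being down
    docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' newest-cni-383500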
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-383500 -n newest-cni-383500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-383500 -n newest-cni-383500: exit status 2 (581.634ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-383500 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-383500 logs -n 25: (1.6847206s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │                     │
	│ image   │ embed-certs-653800 image list --format=json                                                                                                                                                                                │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ default-k8s-diff-port-278200 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-184000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	│ stop    │ -p no-preload-184000 --alsologtostderr -v=3                                                                                                                                                                                │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ addons  │ enable dashboard -p no-preload-184000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ start   │ -p no-preload-184000 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-383500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	│ stop    │ -p newest-cni-383500 --alsologtostderr -v=3                                                                                                                                                                                │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │ 17 Dec 25 02:07 UTC │
	│ addons  │ enable dashboard -p newest-cni-383500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │ 17 Dec 25 02:07 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │                     │
	│ image   │ newest-cni-383500 image list --format=json                                                                                                                                                                                 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:13 UTC │ 17 Dec 25 02:13 UTC │
	│ pause   │ -p newest-cni-383500 --alsologtostderr -v=1                                                                                                                                                                                │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:13 UTC │ 17 Dec 25 02:13 UTC │
	│ unpause │ -p newest-cni-383500 --alsologtostderr -v=1                                                                                                                                                                                │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:14 UTC │ 17 Dec 25 02:14 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:07:37
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
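	# The header above documents the klog line format ([IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg),
	# where the leading letter is the severity: Info, Warning, Error, Fatal. A quick way to pull only
	# warnings and errors out of a saved copy of this log (minikube.log is a hypothetical filename):
	grep -E '^[WE][0-9]{4} ' minikube.log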
	I1217 02:07:37.336708    6296 out.go:360] Setting OutFile to fd 968 ...
	I1217 02:07:37.380113    6296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:07:37.380113    6296 out.go:374] Setting ErrFile to fd 1700...
	I1217 02:07:37.380113    6296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:07:37.394455    6296 out.go:368] Setting JSON to false
	I1217 02:07:37.396490    6296 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8845,"bootTime":1765928411,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 02:07:37.397485    6296 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 02:07:37.401853    6296 out.go:179] * [newest-cni-383500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 02:07:37.405009    6296 notify.go:221] Checking for updates...
	I1217 02:07:37.407761    6296 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:37.412054    6296 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:07:37.415031    6296 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 02:07:37.416942    6296 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:07:37.418887    6296 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1217 02:07:37.439676    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:07:37.422499    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:37.422499    6296 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:07:37.541250    6296 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 02:07:37.544536    6296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:07:37.790862    6296 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:07:37.763465755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:07:37.793941    6296 out.go:179] * Using the docker driver based on existing profile
	I1217 02:07:37.795944    6296 start.go:309] selected driver: docker
	I1217 02:07:37.795944    6296 start.go:927] validating driver "docker" against &{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:37.796941    6296 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:07:37.881125    6296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:07:38.106129    6296 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:07:38.085504737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:07:38.106129    6296 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 02:07:38.106129    6296 cni.go:84] Creating CNI manager for ""
	I1217 02:07:38.106661    6296 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:07:38.106789    6296 start.go:353] cluster config:
	{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:38.110370    6296 out.go:179] * Starting "newest-cni-383500" primary control-plane node in "newest-cni-383500" cluster
	I1217 02:07:38.113499    6296 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 02:07:38.115628    6296 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:07:38.118799    6296 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:07:38.118867    6296 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:07:38.118972    6296 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 02:07:38.119036    6296 cache.go:65] Caching tarball of preloaded images
	I1217 02:07:38.119094    6296 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 02:07:38.119094    6296 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 02:07:38.119094    6296 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 02:07:38.197259    6296 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:07:38.197259    6296 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:07:38.197259    6296 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:07:38.197259    6296 start.go:360] acquireMachinesLock for newest-cni-383500: {Name:mk34ae41921c4a11acc2a38ede8796b825a35934 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:07:38.197259    6296 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-383500"
	I1217 02:07:38.197259    6296 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:07:38.197259    6296 fix.go:54] fixHost starting: 
	I1217 02:07:38.204499    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:38.259240    6296 fix.go:112] recreateIfNeeded on newest-cni-383500: state=Stopped err=<nil>
	W1217 02:07:38.259240    6296 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 02:07:38.262335    6296 out.go:252] * Restarting existing docker container for "newest-cni-383500" ...
	I1217 02:07:38.265716    6296 cli_runner.go:164] Run: docker start newest-cni-383500
	I1217 02:07:38.804123    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:38.863188    6296 kic.go:430] container "newest-cni-383500" state is running.
	I1217 02:07:38.868900    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:38.924169    6296 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 02:07:38.926083    6296 machine.go:94] provisionDockerMachine start ...
	I1217 02:07:38.928987    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:38.984001    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:38.984993    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:38.984993    6296 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:07:38.986003    6296 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 02:07:42.161557    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 02:07:42.161646    6296 ubuntu.go:182] provisioning hostname "newest-cni-383500"
	I1217 02:07:42.166827    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.231443    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:42.231698    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:42.231698    6296 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-383500 && echo "newest-cni-383500" | sudo tee /etc/hostname
	I1217 02:07:42.423907    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 02:07:42.432743    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.491085    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:42.491085    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:42.491085    6296 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-383500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-383500/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-383500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:07:42.667009    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
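	# The script above is idempotent: it touches /etc/hosts only when no entry for the hostname exists,
	# preferring to rewrite an existing 127.0.1.1 line over appending a new one. Given the empty output
	# above, the hostname should now resolve inside the node; one way to confirm from the host:
	docker exec newest-cni-383500 grep newest-cni-383500 /etc/hosts
	# expected: an entry such as 127.0.1.1 newest-cni-383500 (or whichever line already satisfied the check)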
	I1217 02:07:42.667009    6296 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 02:07:42.667009    6296 ubuntu.go:190] setting up certificates
	I1217 02:07:42.667009    6296 provision.go:84] configureAuth start
	I1217 02:07:42.671320    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:42.724474    6296 provision.go:143] copyHostCerts
	I1217 02:07:42.725072    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 02:07:42.725072    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 02:07:42.725072    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 02:07:42.726229    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 02:07:42.726229    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 02:07:42.726812    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 02:07:42.727386    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 02:07:42.727386    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 02:07:42.727386    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 02:07:42.728644    6296 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-383500 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-383500]
	I1217 02:07:42.882778    6296 provision.go:177] copyRemoteCerts
	I1217 02:07:42.886667    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:07:42.889412    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.946034    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:43.080244    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 02:07:43.111350    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:07:43.145228    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 02:07:43.176328    6296 provision.go:87] duration metric: took 509.312ms to configureAuth
	I1217 02:07:43.176328    6296 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:07:43.176328    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:43.180705    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.236378    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.237514    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.237514    6296 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 02:07:43.404492    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 02:07:43.404492    6296 ubuntu.go:71] root file system type: overlay
	I1217 02:07:43.405056    6296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 02:07:43.408624    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.465282    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.465408    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.465408    6296 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 02:07:43.658319    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 02:07:43.662395    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.719191    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.719552    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.719552    6296 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 02:07:43.890999    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
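	# The diff-or-replace idiom above makes the unit install idempotent: the daemon-reload,
	# enable, and restart only run when the freshly rendered docker.service actually differs
	# from the one on disk. The same pattern, generalized (foo.service is a hypothetical unit):
	sudo diff -u /lib/systemd/system/foo.service /lib/systemd/system/foo.service.new || {
		sudo mv /lib/systemd/system/foo.service.new /lib/systemd/system/foo.service
		sudo systemctl daemon-reload && sudo systemctl restart foo
	}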
	I1217 02:07:43.890999    6296 machine.go:97] duration metric: took 4.9648419s to provisionDockerMachine
	I1217 02:07:43.890999    6296 start.go:293] postStartSetup for "newest-cni-383500" (driver="docker")
	I1217 02:07:43.890999    6296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:07:43.895385    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:07:43.899109    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.952181    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.085157    6296 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:07:44.092998    6296 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:07:44.093086    6296 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:07:44.093086    6296 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 02:07:44.093465    6296 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 02:07:44.094379    6296 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 02:07:44.099969    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:07:44.115031    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 02:07:44.146317    6296 start.go:296] duration metric: took 255.2637ms for postStartSetup
	I1217 02:07:44.150381    6296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:07:44.153098    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.206142    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.337637    6296 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:07:44.346313    6296 fix.go:56] duration metric: took 6.1489614s for fixHost
	I1217 02:07:44.346313    6296 start.go:83] releasing machines lock for "newest-cni-383500", held for 6.1489614s
	I1217 02:07:44.350643    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:44.409164    6296 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 02:07:44.413957    6296 ssh_runner.go:195] Run: cat /version.json
	I1217 02:07:44.414540    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.416694    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.466739    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.469418    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	W1217 02:07:44.591848    6296 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 02:07:44.598090    6296 ssh_runner.go:195] Run: systemctl --version
	I1217 02:07:44.614283    6296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:07:44.624324    6296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:07:44.628955    6296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:07:44.642200    6296 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:07:44.642243    6296 start.go:496] detecting cgroup driver to use...
	I1217 02:07:44.642333    6296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:07:44.642453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:07:44.671216    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:07:44.689408    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:07:44.702919    6296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:07:44.707856    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:07:44.727869    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:07:44.747180    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	W1217 02:07:44.751020    6296 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 02:07:44.751020    6296 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 02:07:44.766866    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:07:44.786853    6296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:07:44.806986    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:07:44.828346    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:07:44.848400    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:07:44.870349    6296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:07:44.887217    6296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:07:44.905216    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:45.047629    6296 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 02:07:45.203749    6296 start.go:496] detecting cgroup driver to use...
	I1217 02:07:45.203842    6296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:07:45.209421    6296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 02:07:45.236823    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:07:45.259331    6296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 02:07:45.337368    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:07:45.361492    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:07:45.381383    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:07:45.409600    6296 ssh_runner.go:195] Run: which cri-dockerd
	I1217 02:07:45.421762    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 02:07:45.435668    6296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 02:07:45.461708    6296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 02:07:45.616228    6296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 02:07:45.751670    6296 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 02:07:45.751670    6296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
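	# The 130-byte daemon.json payload itself is not echoed in the log; a plausible sketch of what
	# "configuring docker to use cgroupfs as cgroup driver" writes (the exact content is an
	# assumption, not confirmed above):
	echo '{"exec-opts": ["native.cgroupdriver=cgroupfs"]}' | sudo tee /etc/docker/daemon.json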
	I1217 02:07:45.778504    6296 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 02:07:45.800985    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:45.956342    6296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 02:07:46.816501    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:07:46.840410    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 02:07:46.865817    6296 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 02:07:46.890943    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:07:46.914319    6296 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 02:07:47.058242    6296 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 02:07:47.214522    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:47.355565    6296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	W1217 02:07:47.472644    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:07:47.382801    6296 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 02:07:47.407455    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:47.558893    6296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 02:07:47.666138    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:07:47.686246    6296 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 02:07:47.690618    6296 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 02:07:47.697013    6296 start.go:564] Will wait 60s for crictl version
	I1217 02:07:47.702316    6296 ssh_runner.go:195] Run: which crictl
	I1217 02:07:47.713878    6296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:07:47.755301    6296 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 02:07:47.758809    6296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:07:47.803772    6296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:07:47.845573    6296 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 02:07:47.849368    6296 cli_runner.go:164] Run: docker exec -t newest-cni-383500 dig +short host.docker.internal
	I1217 02:07:47.978778    6296 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
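	The dig against host.docker.internal is how minikube discovers the Windows host's address from inside the kicbase container; the same probe can be run by hand:

	    # Same lookup as the cli_runner call above; 192.168.65.254 is Docker
	    # Desktop's conventional host address on the default NAT network.
	    docker exec -t newest-cni-383500 dig +short host.docker.internal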
	I1217 02:07:47.983162    6296 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 02:07:47.993198    6296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
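	The one-liner above uses a strip-then-append idiom so repeated starts never stack duplicate entries; the same command, unpacked for readability:

	    # Remove any stale host.minikube.internal line, append the fresh mapping,
	    # then copy the temp file back over /etc/hosts in a single sudo step.
	    { grep -v $'\thost.minikube.internal$' /etc/hosts
	      echo "192.168.65.254	host.minikube.internal"
	    } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts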
	I1217 02:07:48.011887    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:48.072090    6296 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 02:07:48.073820    6296 kubeadm.go:884] updating cluster {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:07:48.073820    6296 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:07:48.077080    6296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 02:07:48.110342    6296 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 02:07:48.110411    6296 docker.go:621] Images already preloaded, skipping extraction
	I1217 02:07:48.113821    6296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 02:07:48.144461    6296 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 02:07:48.144530    6296 cache_images.go:86] Images are preloaded, skipping loading
	I1217 02:07:48.144530    6296 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1217 02:07:48.144779    6296 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-383500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 02:07:48.149102    6296 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 02:07:48.225894    6296 cni.go:84] Creating CNI manager for ""
	I1217 02:07:48.225894    6296 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:07:48.225894    6296 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 02:07:48.225894    6296 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-383500 NodeName:newest-cni-383500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:07:48.226504    6296 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-383500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 02:07:48.230913    6296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 02:07:48.243749    6296 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:07:48.248634    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:07:48.262382    6296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 02:07:48.284386    6296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:07:48.306623    6296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
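	The rendered config lands as /var/tmp/minikube/kubeadm.yaml.new; on this restart path it is later diffed against the live kubeadm.yaml rather than feeding a fresh kubeadm init. A hypothetical spot-check of the staged file (not run by the test; assumes the bundled kubeadm supports the validate subcommand):

	    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new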
	I1217 02:07:48.332101    6296 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:07:48.341865    6296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:07:48.360919    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:48.498620    6296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:07:48.520308    6296 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500 for IP: 192.168.76.2
	I1217 02:07:48.520346    6296 certs.go:195] generating shared ca certs ...
	I1217 02:07:48.520390    6296 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:48.520420    6296 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 02:07:48.521152    6296 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 02:07:48.521359    6296 certs.go:257] generating profile certs ...
	I1217 02:07:48.521695    6296 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key
	I1217 02:07:48.521695    6296 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8
	I1217 02:07:48.522472    6296 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key
	I1217 02:07:48.523217    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 02:07:48.523515    6296 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 02:07:48.523598    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 02:07:48.523888    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 02:07:48.524140    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 02:07:48.524399    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 02:07:48.525045    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 02:07:48.526649    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:07:48.558725    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 02:07:48.590333    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:07:48.621493    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 02:07:48.650907    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:07:48.678948    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:07:48.708871    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:07:48.738822    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 02:07:48.769873    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 02:07:48.801411    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 02:07:48.828208    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:07:48.859551    6296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:07:48.888197    6296 ssh_runner.go:195] Run: openssl version
	I1217 02:07:48.903194    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.920018    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 02:07:48.936734    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.943690    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.948571    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.997651    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:07:49.015514    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.035513    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:07:49.056511    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.065394    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.070742    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.117805    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:07:49.140198    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.156992    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 02:07:49.175485    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.184194    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.187479    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.237543    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
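	The <hash>.0 names being tested are OpenSSL's subject-hash lookup scheme: a CA is found under /etc/ssl/certs/ via a symlink named after its subject hash. Each of the three check cycles above amounts to (illustrative restatement of the logged steps):

	    # Link a CA into the OpenSSL trust directory under its subject hash.
	    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem)
	    sudo ln -fs /usr/share/ca-certificates/4168.pem "/etc/ssl/certs/${h}.0"   # h = 51391683 per the log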
	I1217 02:07:49.254809    6296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:07:49.269508    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:07:49.317073    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:07:49.365797    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:07:49.413853    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:07:49.462871    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:07:49.515512    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
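	Each -checkend 86400 probe exits 0 only if the certificate remains valid for at least the next 86400 seconds (24 hours); a non-zero exit is what would push minikube into regenerating the cert. The contract, spelled out (hypothetical snippet, not from the run):

	    if openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400; then
	      echo "still valid for 24h or more"
	    else
	      echo "expiring within 24h"   # -checkend returns exit status 1
	    fi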
	I1217 02:07:49.558666    6296 kubeadm.go:401] StartCluster: {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:49.563317    6296 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 02:07:49.602899    6296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:07:49.616365    6296 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:07:49.616365    6296 kubeadm.go:598] restartPrimaryControlPlane start ...
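	Restart detection hinges on the three paths probed two lines up: when kubeadm-flags.env, config.yaml, and the etcd data dir all exist, minikube attempts a control-plane restart instead of a clean init. As a shell restatement (illustrative, not from the run):

	    if sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd >/dev/null 2>&1; then
	      echo "existing cluster state found -- attempt restart"
	    else
	      echo "no prior state -- full kubeadm init"
	    fi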
	I1217 02:07:49.622022    6296 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:07:49.637152    6296 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:07:49.641090    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.693295    6296 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-383500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:49.693843    6296 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-383500" cluster setting kubeconfig missing "newest-cni-383500" context setting]
	I1217 02:07:49.694722    6296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:49.716755    6296 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:07:49.731850    6296 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1217 02:07:49.731850    6296 kubeadm.go:602] duration metric: took 115.4836ms to restartPrimaryControlPlane
	I1217 02:07:49.731850    6296 kubeadm.go:403] duration metric: took 173.1816ms to StartCluster
	I1217 02:07:49.731850    6296 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:49.731850    6296 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:49.732839    6296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:49.734654    6296 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 02:07:49.734654    6296 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:07:49.734654    6296 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:70] Setting dashboard=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:49.734654    6296 addons.go:70] Setting default-storageclass=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.734654    6296 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:239] Setting addon dashboard=true in "newest-cni-383500"
	W1217 02:07:49.734654    6296 addons.go:248] addon dashboard should already be in state true
	I1217 02:07:49.735179    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.739634    6296 out.go:179] * Verifying Kubernetes components...
	I1217 02:07:49.743427    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.744378    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.744378    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.745812    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:49.809135    6296 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:07:49.809532    6296 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 02:07:49.812989    6296 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:49.812989    6296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:07:49.816981    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.817010    6296 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:07:49.818467    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:07:49.818467    6296 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:07:49.823270    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.824987    6296 addons.go:239] Setting addon default-storageclass=true in "newest-cni-383500"
	I1217 02:07:49.825100    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.836645    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.881995    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.881995    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.889991    6296 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:49.889991    6296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:07:49.892991    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.925992    6296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:07:49.943010    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.950996    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:50.005058    6296 api_server.go:52] waiting for apiserver process to appear ...
	I1217 02:07:50.009064    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:50.011068    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.014077    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:07:50.014077    6296 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:07:50.034057    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:07:50.034057    6296 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:07:50.102553    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:07:50.102611    6296 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:07:50.106900    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:50.124027    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:07:50.124027    6296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 02:07:50.189590    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:07:50.189677    6296 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1217 02:07:50.190082    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.190082    6296 retry.go:31] will retry after 343.200838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
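	Every failed apply in this window has the same root cause: the apiserver is not yet listening on localhost:8443, so kubectl cannot fetch the OpenAPI schema it needs for client-side validation. Retrying until the apiserver is up, as minikube does here, is the correct handling; the --validate=false escape hatch quoted in the error would merely skip the schema check, e.g. (hypothetical, not run by the test):

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --validate=false \
	      -f /etc/kubernetes/addons/storage-provisioner.yaml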
	I1217 02:07:50.212250    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:07:50.212311    6296 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:07:50.231619    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:07:50.231619    6296 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W1217 02:07:50.241078    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.241078    6296 retry.go:31] will retry after 338.608253ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.254747    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:07:50.254794    6296 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:07:50.277655    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:50.277655    6296 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:07:50.303268    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:50.381205    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.381205    6296 retry.go:31] will retry after 204.689537ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.510673    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:50.538343    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.585518    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:50.590250    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:50.625635    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.625793    6296 retry.go:31] will retry after 198.686568ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.703247    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.703247    6296 retry.go:31] will retry after 199.792365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.713669    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.714671    6296 retry.go:31] will retry after 441.125735ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.831068    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.910787    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:50.921027    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.921027    6296 retry.go:31] will retry after 637.088373ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.993148    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.993148    6296 retry.go:31] will retry after 819.774881ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.009768    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.161082    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:51.282295    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.282369    6296 retry.go:31] will retry after 677.278565ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.510844    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.563702    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:51.642986    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.642986    6296 retry.go:31] will retry after 1.231128198s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.817677    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:51.902470    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.902470    6296 retry.go:31] will retry after 1.160161898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.964724    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:52.009393    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:52.053520    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.053520    6296 retry.go:31] will retry after 497.775491ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.510530    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:52.556698    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:52.641425    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.641425    6296 retry.go:31] will retry after 893.419079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.880811    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:52.961643    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.961643    6296 retry.go:31] will retry after 1.354718896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.009905    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:53.068292    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:53.159843    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.159885    6296 retry.go:31] will retry after 830.811591ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
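
Every failure above is kubectl's client-side validation step: before applying, it downloads the OpenAPI schema from https://localhost:8443, and with no apiserver listening the fetch is refused. The error text itself names the escape hatch, --validate=false. A hedged Go sketch of invoking the same apply with validation disabled follows; the flag, manifest paths, and KUBECONFIG value are taken from the log, while the wrapper itself is illustrative. Note that skipping validation only removes the OpenAPI download, so the apply would still fail against an unreachable apiserver.

    // applysketch.go - shell out to kubectl apply, optionally without
    // client-side validation. Illustrative only; with the apiserver down
    // the apply itself still fails, --validate=false merely skips the
    // OpenAPI fetch that produced the errors above.
    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func kubectlApply(kubeconfig, manifest string, validate bool) error {
    	args := []string{"apply", "--force", "-f", manifest}
    	if !validate {
    		args = append(args, "--validate=false")
    	}
    	cmd := exec.Command("kubectl", args...)
    	cmd.Env = append(os.Environ(), "KUBECONFIG="+kubeconfig)
    	cmd.Stdout = os.Stdout
    	cmd.Stderr = os.Stderr
    	return cmd.Run()
    }

    func main() {
    	err := kubectlApply("/var/lib/minikube/kubeconfig",
    		"/etc/kubernetes/addons/storageclass.yaml", false)
    	if err != nil {
    		fmt.Println("apply failed:", err)
    	}
    }
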
	I1217 02:07:53.510300    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:53.539679    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:53.634195    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.634195    6296 retry.go:31] will retry after 1.875797166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.997012    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:54.010116    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:54.085004    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.085004    6296 retry.go:31] will retry after 2.403477641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.321510    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:54.401677    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.401677    6296 retry.go:31] will retry after 2.197762331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.509750    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:55.011577    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:55.509949    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
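
Between apply attempts, the same process polls sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms, waiting for the apiserver process to appear inside the node. A self-contained sketch of that polling loop is below; running pgrep locally via os/exec (minikube runs it over SSH through ssh_runner.go) and the exact interval are assumptions for illustration.

    // pollsketch.go - poll for a process by pattern until it appears.
    // Sketch of the pgrep loop in the log; minikube runs the command
    // over SSH inside the node, here it runs locally for simplicity.
    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForProcess re-runs `pgrep -xnf pattern` until it exits 0
    // (a match was found) or the context is cancelled.
    func waitForProcess(ctx context.Context, pattern string) error {
    	ticker := time.NewTicker(500 * time.Millisecond) // assumed interval
    	defer ticker.Stop()
    	for {
    		// pgrep exits 0 when at least one process matches.
    		if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return ctx.Err()
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
    	defer cancel()
    	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*"); err != nil {
    		fmt.Println("apiserver never came up:", err)
    		return
    	}
    	fmt.Println("apiserver process found")
    }
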
	I1217 02:07:55.514301    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:55.590724    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:55.590724    6296 retry.go:31] will retry after 3.771224323s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.010995    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:56.493760    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:56.509755    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:56.580067    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.580067    6296 retry.go:31] will retry after 2.862008002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.606008    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:56.692846    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.693375    6296 retry.go:31] will retry after 3.419223727s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:57.009866    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:57.510945    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
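
The single line from PID 6768 is interleaved output from a parallel test: the no-preload-184000 cluster's readiness check is hitting EOF against its own apiserver at 127.0.0.1:63565. For reference, a minimal client-go sketch of a node Ready check in that style follows; it is an assumed illustration of the pattern, not minikube's node_ready.go.

    // readysketch.go - check a node's Ready condition with client-go.
    // Hedged illustration of the node_ready.go check referenced above.
    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // nodeReady fetches the named node and reports whether its Ready
    // condition is True. A transport-level error (like the EOF above)
    // surfaces from Get before any condition can be inspected.
    func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
    	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
    	if err != nil {
    		return false, err
    	}
    	for _, c := range node.Status.Conditions {
    		if c.Type == corev1.NodeReady {
    			return c.Status == corev1.ConditionTrue, nil
    		}
    	}
    	return false, nil
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
    	if err != nil {
    		panic(err)
    	}
    	cs := kubernetes.NewForConfigOrDie(cfg)
    	ready, err := nodeReady(context.Background(), cs, "no-preload-184000")
    	fmt.Println("ready:", ready, "err:", err)
    }
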
	I1217 02:07:57.510327    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:58.010333    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:58.511391    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:59.013796    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:59.367655    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:59.447582    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:59.457416    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.457416    6296 retry.go:31] will retry after 6.254269418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.510215    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:59.536524    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.536524    6296 retry.go:31] will retry after 4.240139996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.010517    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:00.118263    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:00.197472    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.197472    6296 retry.go:31] will retry after 5.486941273s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.511349    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:01.012031    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:01.510877    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:02.011372    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:02.510995    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.011372    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.511479    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
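
Interleaved with the applies, minikube (PID 6296 here) polls roughly every 500ms for a running apiserver process using "sudo pgrep -xnf kube-apiserver.*minikube.*"; while pgrep keeps exiting non-zero, the applies above cannot succeed. A hedged sketch of that health poll (the command and cadence are taken from the log; the timeout and function name are illustrative):

	package main

	import (
		"fmt"
		"log"
		"os/exec"
		"time"
	)

	// waitForAPIServer polls pgrep until a kube-apiserver process appears
	// or the timeout elapses, matching the ~500ms cadence in the log.
	func waitForAPIServer(timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			// same probe the log shows via ssh_runner
			cmd := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*")
			if cmd.Run() == nil {
				return nil // pgrep exited 0: a matching process exists
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("kube-apiserver did not start within %v", timeout)
	}

	func main() {
		if err := waitForAPIServer(10 * time.Second); err != nil {
			log.Fatal(err)
		}
		log.Println("apiserver process found")
	}
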
	I1217 02:08:03.781390    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:03.867561    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:03.867561    6296 retry.go:31] will retry after 5.255488401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:04.011296    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:04.510695    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.011055    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.510174    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.690069    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:05.718147    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:05.792389    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:05.792389    6296 retry.go:31] will retry after 3.294946391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:05.802187    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:05.802187    6296 retry.go:31] will retry after 6.599881974s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:06.010721    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:06.509941    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:07.010092    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:07.543861    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
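
The occasional lines from PID 6768 belong to a second, concurrently running test: the no-preload-184000 cluster polling its node's Ready condition through the apiserver REST endpoint and getting EOF, meaning the connection drops before any response arrives. A rough sketch of that probe (the insecure TLS config and unauthenticated request are simplifications; the real client uses the cluster's kubeconfig credentials and CA):

	package main

	import (
		"crypto/tls"
		"log"
		"net/http"
		"time"
	)

	// checkNodeReady issues the same GET the 6768 process logs; an "EOF"
	// error means the apiserver accepted the TCP connection but closed it
	// before replying, typical while the control plane is still restarting.
	func checkNodeReady(base, node string) error {
		client := &http.Client{
			Timeout: 10 * time.Second,
			Transport: &http.Transport{
				// illustrative only: the real client authenticates instead
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(base + "/api/v1/nodes/" + node)
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		// a full check would decode status.conditions and look for Ready=True
		return nil
	}

	func main() {
		if err := checkNodeReady("https://127.0.0.1:63565", "no-preload-184000"); err != nil {
			log.Println("will retry:", err)
		}
	}
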
	I1217 02:08:07.511303    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:08.011059    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:08.511015    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.009909    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.092821    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:09.127423    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:09.180638    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.180716    6296 retry.go:31] will retry after 13.056189647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:09.211988    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.212069    6296 retry.go:31] will retry after 13.872512266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.510829    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:10.010907    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:10.513112    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:11.010572    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:11.509543    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.010570    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.409071    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:12.497495    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:12.497495    6296 retry.go:31] will retry after 9.788092681s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:12.510004    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:13.011338    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:13.509984    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:14.010499    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:14.511126    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.010949    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.511741    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:16.011278    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:16.511157    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:17.010863    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:17.577088    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:17.511273    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.010782    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.510594    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:19.011193    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:19.512050    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:20.011700    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:20.511001    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.010461    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.510457    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:22.011002    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:22.242227    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:22.290434    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:22.384800    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.384884    6296 retry.go:31] will retry after 11.75975207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:22.424758    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.424758    6296 retry.go:31] will retry after 15.557196078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.510556    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:23.011645    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:23.090496    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:23.176544    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:23.176625    6296 retry.go:31] will retry after 13.26458747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:23.510872    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.011245    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.511483    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:25.011656    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:25.510967    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:26.012125    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:26.512672    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:27.011155    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:27.612061    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:27.512368    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:28.010889    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:28.511767    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.011035    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.512111    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:30.010919    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:30.510464    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:31.010433    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:31.511392    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.010680    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.510963    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:33.011818    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:33.511638    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:34.011591    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:34.151810    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:34.242474    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:34.242474    6296 retry.go:31] will retry after 23.644538854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:34.513602    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.011269    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.511142    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:36.011267    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:36.446774    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:08:36.511283    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:36.541778    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:36.541860    6296 retry.go:31] will retry after 14.024805043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:37.010743    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:37.653192    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:37.510520    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:37.987959    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:08:38.011587    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:38.113276    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:38.113276    6296 retry.go:31] will retry after 20.609884455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:38.511817    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:39.012624    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:39.511353    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:40.011079    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:40.511636    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.011582    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.512671    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:42.011503    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:42.511640    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:43.011054    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:43.510485    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.011395    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.511333    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:45.011435    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:45.513316    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:46.012600    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:46.512307    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.012227    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.512888    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:48.011996    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:48.511276    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:49.011053    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:49.511776    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
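
Between retries, minikube polls for a running kube-apiserver process roughly every 500ms with `sudo pgrep -xnf kube-apiserver.*minikube.*`; the unbroken run of identical `ssh_runner.go:195` lines above is that wait loop never finding the process. A sketch of the same wait, assuming a local `pgrep` in place of minikube's SSH runner:

```go
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until the pattern matches or the context
// expires. The real code runs this over SSH into the minikube node;
// exec.CommandContext stands in for that here.
func waitForProcess(ctx context.Context, pattern string) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		// pgrep exits 0 only when at least one process matches.
		if exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()
	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*"); err != nil {
		fmt.Println(err)
	}
}
```
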
	I1217 02:08:50.011678    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:50.050889    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.050889    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:50.055201    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:50.085770    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.085770    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:50.090316    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:50.123762    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.123762    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:50.127529    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:50.157626    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.157626    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:50.163652    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:50.189945    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.189945    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:50.193620    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:50.222819    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.222866    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:50.227818    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:50.256909    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.256909    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:50.260970    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:50.290387    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.290387    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
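
When the process check keeps failing, the log collector falls back to asking Docker directly for each control-plane container by its `k8s_`-prefixed name; every `docker ps -a --filter=name=... --format={{.ID}}` above returns zero IDs, hence the `No container was found matching` warnings. A sketch of that lookup, assuming a working local Docker CLI:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all containers (running or exited)
// whose name matches the given prefix, mirroring the
// docker ps -a --filter=name=... --format={{.ID}} calls in the log.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name="+name, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	for _, c := range []string{"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}
```
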
	I1217 02:08:50.290387    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:50.290387    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:50.357876    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:50.357876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:50.420098    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:50.420098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:50.460376    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:50.460376    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:50.542989    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:50.534097    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.535406    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.536541    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.537655    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.539165    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:50.534097    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.535406    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.536541    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.537655    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.539165    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:50.542989    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:50.542989    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
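
Each failed probe cycle ends with the same diagnostic sweep: container status via crictl (falling back to `docker ps -a`), kubelet and Docker unit logs via journalctl, filtered dmesg output, and a `kubectl describe nodes`. A sketch of driving that command set, assuming local shell execution in place of minikube's SSH runner:

```go
package main

import (
	"fmt"
	"os/exec"
)

// gatherLogs runs the diagnostic commands the log shows minikube issuing
// after each failed apiserver probe. Output is printed rather than
// bundled, and commands run locally instead of over SSH.
func gatherLogs() {
	cmds := map[string]string{
		"container status": "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a",
		"kubelet":          "sudo journalctl -u kubelet -n 400",
		"dmesg":            "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400",
		"Docker":           "sudo journalctl -u docker -u cri-docker -n 400",
	}
	for name, cmd := range cmds {
		fmt.Printf("Gathering logs for %s ...\n", name)
		out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
		if err != nil {
			fmt.Printf("failed %s: %v\n", name, err)
		}
		fmt.Print(string(out))
	}
}

func main() { gatherLogs() }
```
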
	I1217 02:08:50.570331    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:50.645772    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:50.645772    6296 retry.go:31] will retry after 16.344343138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:47.695483    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:53.075519    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:53.098924    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:53.131675    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.131675    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:53.135542    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:53.166511    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.166511    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:53.170265    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:53.198547    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.198547    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:53.202694    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:53.232459    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.232459    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:53.235758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:53.263802    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.263802    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:53.268318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:53.296956    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.296956    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:53.301349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:53.331331    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.331331    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:53.335255    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:53.367520    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.367550    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:53.367577    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:53.367602    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:53.453750    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:53.444459    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.445431    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.446930    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.448003    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.449000    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:53.444459    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.445431    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.446930    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.448003    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.449000    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:53.453837    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:53.453887    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:53.485058    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:53.485058    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:53.540050    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:53.540050    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:53.604101    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:53.604101    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.146858    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:56.172227    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:56.203897    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.203941    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:56.207562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:56.236114    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.236114    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:56.240341    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:56.274958    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.274958    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:56.280577    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:56.308906    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.308906    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:56.312811    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:56.340777    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.340836    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:56.343843    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:56.371408    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.371441    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:56.374771    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:56.406487    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.406487    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:56.410973    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:56.441247    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.441247    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:56.441247    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:56.441247    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:56.506877    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:56.506877    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.548841    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:56.548841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:56.633101    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:56.624778    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.625942    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.626969    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.628325    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.629359    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:56.624778    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.625942    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.626969    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.628325    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.629359    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:56.633101    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:56.633101    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:56.659421    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:56.659457    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:57.892877    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:57.970838    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:57.970838    6296 retry.go:31] will retry after 27.385193451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:58.728649    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:58.834139    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:58.834680    6296 retry.go:31] will retry after 32.13321777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:59.213728    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:59.238361    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:59.266298    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.266298    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:59.270295    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:59.299414    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.299414    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:59.302581    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:59.335627    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.335627    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:59.339238    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:59.367042    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.367042    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:59.371258    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:59.401507    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.401507    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:59.405468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:59.436657    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.436657    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:59.440955    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:59.471027    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.471027    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:59.474047    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:59.505164    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.505164    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:59.505164    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:59.505164    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:59.533835    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:59.533835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:59.586695    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:59.587671    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:59.648841    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:59.648841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:59.688691    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:59.688691    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:59.777044    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:59.763261    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.764003    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.767722    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.770018    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.771065    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:59.763261    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.764003    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.767722    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.770018    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.771065    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
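
Every kubectl invocation in this stretch dies the same way, `dial tcp [::1]:8443: connect: connection refused`, meaning nothing is listening on the apiserver port at all, which matches the empty `docker ps` results above. A minimal probe that distinguishes "port closed" from "server up but unhealthy", assuming the standard apiserver `/readyz` endpoint on localhost:8443:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"net"
	"net/http"
	"time"
)

// checkAPIServer first dials the TCP port (the step failing in the log),
// then, if that succeeds, hits /readyz over HTTPS. Certificate checks
// are skipped since this is a diagnostic probe, not a client.
func checkAPIServer(addr string) error {
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		return fmt.Errorf("port closed, apiserver not running: %w", err)
	}
	conn.Close()

	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://" + addr + "/readyz")
	if err != nil {
		return fmt.Errorf("port open but /readyz unreachable: %w", err)
	}
	defer resp.Body.Close()
	fmt.Printf("apiserver /readyz: %s\n", resp.Status)
	return nil
}

func main() {
	if err := checkAPIServer("localhost:8443"); err != nil {
		fmt.Println(err)
	}
}
```
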
	I1217 02:09:02.282707    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:02.307570    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:02.340326    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.340412    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:02.343993    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:02.374035    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.374079    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:02.377688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	W1217 02:08:57.736771    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:02.409724    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.409724    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:02.414154    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:02.442993    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.442993    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:02.447591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:02.474966    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.474966    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:02.479447    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:02.511675    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.511675    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:02.515939    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:02.544034    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.544034    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:02.548633    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:02.578196    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.578196    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:02.578196    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:02.578196    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:02.642449    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:02.643420    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:02.681562    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:02.681562    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:02.766017    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:02.754951    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.756418    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.757119    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.759531    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.760553    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:02.754951    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.756418    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.757119    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.759531    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.760553    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:02.766017    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:02.766017    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:02.795166    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:02.795166    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:05.347132    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:05.372840    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:05.424611    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.424686    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:05.428337    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:05.461682    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.461682    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:05.465790    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:05.495395    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.495395    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:05.499215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:05.528620    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.528620    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:05.532226    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:05.560375    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.560375    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:05.564119    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:05.595214    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.595214    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:05.600088    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:05.633183    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.633183    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:05.636776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:05.664840    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.664840    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:05.664840    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:05.664840    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:05.718503    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:05.718503    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:05.781489    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:05.781489    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:05.821081    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:05.821081    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:05.905451    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:05.896107    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.897043    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.898918    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.899910    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.901056    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:05.896107    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.897043    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.898918    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.899910    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.901056    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:05.905451    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:05.905451    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:06.996471    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:09:07.077056    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:07.077056    6296 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
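
At this point the retry budget for `default-storageclass` is exhausted and the failure is downgraded to the user-facing warning above (`out.go:285`) instead of aborting the start. A rough sketch of running addon-enable callbacks and collapsing their failures into one warning; `enableStorageClass` is a hypothetical stand-in for the real callback:

```go
package main

import (
	"errors"
	"fmt"
)

// runCallbacks invokes each addon-enable callback and collects failures
// instead of aborting on the first one, roughly matching how the log
// reports "Enabling 'default-storageclass' returned an error: running
// callbacks: [...]" as a warning rather than a fatal error.
func runCallbacks(name string, callbacks ...func() error) {
	var errs []error
	for _, cb := range callbacks {
		if err := cb(); err != nil {
			errs = append(errs, err)
		}
	}
	if len(errs) > 0 {
		fmt.Printf("! Enabling '%s' returned an error: running callbacks: %v\n",
			name, errors.Join(errs...))
	}
}

func main() {
	enableStorageClass := func() error { // hypothetical stand-in
		return errors.New("kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1")
	}
	runCallbacks("default-storageclass", enableStorageClass)
}
```
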
	I1217 02:09:08.443326    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:08.470285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:08.499191    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.499191    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:08.503346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:08.531727    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.531727    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:08.535874    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:08.567724    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.567724    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:08.571504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:08.601814    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.601814    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:08.605003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:08.638738    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.638815    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:08.642116    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:08.672949    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.672949    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:08.676953    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:08.706081    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.706145    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:08.709298    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:08.737856    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.737856    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:08.737856    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:08.737856    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:08.798236    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:08.798236    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:08.838053    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:08.838053    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:08.925271    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:08.915579    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.916804    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.917832    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.919242    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.920277    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:08.915579    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.916804    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.917832    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.919242    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.920277    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:08.925271    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:08.925271    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:08.952860    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:08.952934    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
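	(The cycle above repeats for each control-plane component because no containers exist yet, and every pass ends in the same localhost:8443 refusal. A minimal manual sketch of the same checks, reusing the commands and names already shown in the log; the curl probe is an addition for illustration, not part of the harness:

	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'                       # apiserver process present?
	    docker ps -a --filter=name=k8s_kube-apiserver --format='{{.ID}}'   # any apiserver container, even exited?
	    curl -k https://localhost:8443/healthz                             # does the advertised port answer at all?

	If all three come back empty/refused, as here, the describe-nodes failures below are expected.)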
	I1217 02:09:11.505032    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:11.532273    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:11.560855    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.560907    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:11.564808    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:11.595967    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.596024    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:11.599911    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:11.628443    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.628443    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:11.632103    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:11.659899    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.659899    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:11.663896    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:11.695830    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.695864    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:11.699333    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:11.728245    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.728314    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:11.731834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:11.762004    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.762038    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:11.765497    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:11.800437    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.800437    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:11.800437    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:11.800437    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.850659    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:11.850659    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:11.927328    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:11.927328    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:11.968115    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:11.968115    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:12.061366    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:12.049456    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.050395    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.051658    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.052989    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.055935    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:12.049456    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.050395    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.051658    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.052989    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.055935    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:12.061366    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:12.061366    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:09:07.775163    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:14.593463    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:14.619698    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:14.649625    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.649625    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:14.653809    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:14.682807    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.682865    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:14.686225    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:14.716867    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.716867    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:14.720947    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:14.748712    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.748712    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:14.753598    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:14.786467    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.786467    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:14.790745    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:14.820388    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.820388    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:14.824364    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:14.856683    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.856715    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:14.860387    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:14.907334    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.907388    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:14.907388    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:14.907388    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:14.970536    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:14.971543    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:15.009837    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:15.009837    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:15.100833    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:15.089537    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.090644    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.091541    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.092652    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.093429    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:15.089537    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.090644    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.091541    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.092652    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.093429    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:15.100833    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:15.100833    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:15.129774    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:15.129838    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:17.687506    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:17.711884    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:17.740676    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.740676    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:17.743807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:17.775526    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.775598    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:17.779196    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:17.810564    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.810564    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:17.815366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:17.847149    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.847149    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:17.850304    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:17.880825    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.880825    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:17.884416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:17.913663    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.913663    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:17.917519    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:17.949675    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.949736    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:17.953399    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:17.981777    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.981777    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:17.981853    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:17.981853    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:18.045143    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:18.045143    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:18.085682    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:18.085682    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:18.174824    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:18.164839    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.166260    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.167755    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.169313    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.170543    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:18.164839    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.166260    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.167755    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.169313    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.170543    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:18.174862    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:18.174890    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:18.201721    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:18.201721    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:20.754573    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:20.779418    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:20.815289    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.815336    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:20.821329    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:20.849494    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.849566    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:20.853416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:20.886139    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.886213    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:20.890864    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:20.921623    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.921691    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:20.925413    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:20.955001    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.955030    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:20.959115    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:20.986446    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.986446    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:20.990622    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:21.019381    6296 logs.go:282] 0 containers: []
	W1217 02:09:21.019903    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:21.023386    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:21.049708    6296 logs.go:282] 0 containers: []
	W1217 02:09:21.049708    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:21.049708    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:21.049708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:21.114512    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:21.114512    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:21.154312    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:21.154312    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:21.241835    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:21.232254    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.233191    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.235446    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.236247    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.238241    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:21.232254    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.233191    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.235446    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.236247    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.238241    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:21.241835    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:21.241835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:21.269935    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:21.269935    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:09:17.811223    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:23.827385    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:23.851293    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:23.884017    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.884017    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:23.887852    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:23.920819    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.920819    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:23.925124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:23.953397    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.953468    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:23.957090    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:23.987965    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.987965    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:23.992238    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:24.021188    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.021188    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:24.027472    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:24.059066    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.059066    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:24.062927    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:24.092066    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.092066    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:24.096083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:24.130020    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.130083    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:24.130083    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:24.130083    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:24.193264    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:24.193264    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:24.233590    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:24.233590    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:24.334738    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:24.323376    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.324478    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.325163    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327407    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327995    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:24.323376    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.324478    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.325163    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327407    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327995    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:24.334738    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:24.334738    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:24.361711    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:24.361711    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:25.361736    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:09:25.443830    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:25.443830    6296 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
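	(The stderr above names its own workaround: validation fails only because the OpenAPI schema cannot be downloaded. A hedged sketch of that retry, using the exact paths from the log; note that with the apiserver down the apply would still fail at connection time, so this helps only when validation is the sole blocker:

	    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	      /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force --validate=false \
	      -f /etc/kubernetes/addons/storage-provisioner.yaml
	)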
	I1217 02:09:26.915928    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:26.940552    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:26.972265    6296 logs.go:282] 0 containers: []
	W1217 02:09:26.972334    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:26.975468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:27.004131    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.004131    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:27.007688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:27.040755    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.040755    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:27.044298    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:27.075607    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.075607    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:27.079764    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:27.109726    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.109726    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:27.113807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:27.142060    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.142060    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:27.145049    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:27.179827    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.179898    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:27.183340    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:27.212340    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.212340    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:27.212340    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:27.212340    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:27.290453    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:27.280957    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.282008    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.283593    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.284873    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.286226    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:27.280957    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.282008    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.283593    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.284873    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.286226    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:27.290453    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:27.290453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:27.317919    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:27.317919    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:27.372636    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:27.372636    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:27.434881    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:27.434881    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:29.980965    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:30.007081    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:30.038766    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.038766    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:30.042837    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:30.074216    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.074277    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:30.077495    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:30.109815    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.109815    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:30.113543    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:30.144692    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.144692    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:30.148595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:30.181530    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.181530    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:30.185056    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:30.230054    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.230054    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:30.233965    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:30.264421    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.264421    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:30.268191    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:30.302463    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.302463    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:30.302463    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:30.302463    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:30.369905    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:30.369905    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:30.407364    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:30.407364    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:30.501045    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:30.489137    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.491259    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.493208    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.494311    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.496063    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:30.489137    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.491259    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.493208    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.494311    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.496063    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:30.501045    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:30.501045    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:30.529058    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:30.529119    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:30.973740    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:09:31.053832    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:31.053832    6296 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:09:31.057712    6296 out.go:179] * Enabled addons: 
	I1217 02:09:31.060716    6296 addons.go:530] duration metric: took 1m41.3245326s for enable addons: enabled=[]
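	(enabled=[] records that every requested addon callback failed. A hedged out-of-band check of the same state, assuming a profile name, which is not shown in this excerpt:

	    minikube addons list -p <profile>
	)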
	W1217 02:09:27.847902    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
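	(This interleaved warning appears to come from the parallel no-preload test (pid 6768) polling the node's Ready condition. A hedged sketch of the equivalent query, with endpoint and node name taken from the warning; auth flags are omitted for brevity:

	    kubectl --server=https://127.0.0.1:63565 --insecure-skip-tls-verify \
	      get node no-preload-184000 \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	)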
	I1217 02:09:33.093000    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:33.117479    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:33.148299    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.148299    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:33.152403    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:33.180747    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.180747    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:33.184258    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:33.214319    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.214389    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:33.217921    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:33.244463    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.244463    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:33.248882    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:33.280520    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.280573    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:33.284251    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:33.313836    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.313883    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:33.318949    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:33.351545    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.351545    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:33.355242    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:33.384638    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.384638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:33.384638    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:33.384638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:33.438624    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:33.438624    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:33.503148    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:33.504145    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:33.542770    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:33.542770    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:33.628872    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:33.616788    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.618355    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.619202    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.622311    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.623559    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:33.616788    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.618355    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.619202    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.622311    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.623559    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:33.628872    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:33.628872    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
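The cycle above is minikube's diagnostic sweep: once no kube-apiserver process is found, it probes for each control-plane container by Docker name filter before collecting logs. Below is a minimal Go sketch of that per-component probe, assuming a local Docker CLI (in the test the same command runs over SSH inside the node); the component names mirror the log, everything else is illustrative and not minikube's actual logs.go code.

// probe_containers.go: a sketch of the per-component container probe seen
// in the cycle above. Component names are taken from the log; the helper
// and output format are illustrative.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs lists IDs of containers (running or not) whose name matches
// the k8s_<component> prefix that kubelet gives pod containers under Docker.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component,
		"--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet",
		"kubernetes-dashboard",
	}
	for _, c := range components {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("probe %q failed: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("No container was found matching %q\n", c)
		}
	}
}

The probe is deliberately docker-level rather than kubectl-level: when the apiserver itself is down, only the container runtime can still answer.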
	I1217 02:09:36.163766    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:36.190660    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:36.219485    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.219485    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:36.223169    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:36.253826    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.253826    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:36.257584    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:36.289684    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.289684    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:36.293455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:36.321228    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.321228    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:36.326076    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:36.355893    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.355893    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:36.360432    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:36.392307    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.392359    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:36.395377    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:36.427797    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.427797    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:36.431432    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:36.465462    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.465547    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:36.465590    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:36.465605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:36.515585    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:36.515688    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:36.577828    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:36.577828    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:36.617923    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:36.617923    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:36.706865    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:36.696037    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.697154    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.698217    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.699314    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.700190    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:36.696037    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.697154    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.698217    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.699314    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.700190    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:36.706865    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:36.706865    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
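Each cycle opens with sudo pgrep -xnf kube-apiserver.*minikube.*: -f matches against the full command line, -x requires the whole line to match the pattern, and -n prints only the newest matching PID. Exit status 1 simply means no process matched, which is what triggers the container-by-container fallback above. A minimal sketch of the same check, assuming pgrep is on PATH:

// apiserver_alive.go: a sketch of the liveness check that opens every
// gathering cycle. The pattern is copied from the log; error handling is
// illustrative.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if ee, ok := err.(*exec.ExitError); ok && ee.ExitCode() == 1 {
		// pgrep exits 1 when no process matched -- not an execution failure.
		fmt.Println("no kube-apiserver process running")
		return
	}
	if err != nil {
		fmt.Printf("pgrep failed: %v\n", err)
		return
	}
	fmt.Printf("kube-apiserver pid: %s\n", strings.TrimSpace(string(out)))
}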
	I1217 02:09:39.240583    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:39.269426    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:39.300548    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.300548    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:39.304455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:39.337640    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.337640    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:39.341427    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:39.375280    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.375280    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:39.379328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:39.408206    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.408291    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:39.413138    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:39.439760    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.439760    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:39.443728    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:39.470865    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.471120    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:39.477630    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:39.510101    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.510101    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:39.515759    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:39.545423    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.545494    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:39.545494    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:39.545559    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:39.574474    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:39.574474    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:39.627410    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:39.627410    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:39.687852    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:39.687852    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:39.730823    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:39.730823    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:39.820771    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:39.809479    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.810890    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.811655    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.814487    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.816836    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:39.809479    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.810890    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.811655    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.814487    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.816836    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
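The repeated kubectl failure above is a plain TCP-level symptom: nothing is listening on the apiserver port inside the node, so the dial is refused immediately rather than timing out. A hedged sketch that distinguishes the two cases follows; the address is illustrative, and on Windows hosts the refused-connection errno value differs even though the constant exists.

// dial_check.go: a sketch that separates "connection refused" (port closed,
// no process listening) from other dial failures such as timeouts.
package main

import (
	"errors"
	"fmt"
	"net"
	"syscall"
	"time"
)

func main() {
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err == nil {
		conn.Close()
		fmt.Println("apiserver port is open")
		return
	}
	if errors.Is(err, syscall.ECONNREFUSED) {
		fmt.Println("connection refused: no apiserver process listening")
	} else {
		fmt.Printf("dial failed: %v\n", err)
	}
}

An immediate refusal, as in the log, points at a dead process rather than a network problem: the kubeconfig and the port are right, there is simply no apiserver behind them.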
	I1217 02:09:42.326489    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:42.349989    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:42.381673    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.381673    6296 logs.go:284] No container was found matching "kube-apiserver"
	W1217 02:09:37.889672    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:42.385392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:42.414575    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.414575    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:42.418510    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:42.452120    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.452120    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:42.456157    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:42.484625    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.484625    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:42.487782    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:42.520235    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.520235    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:42.525546    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:42.558589    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.558589    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:42.561770    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:42.592364    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.592364    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:42.596368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:42.625522    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.625522    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:42.625522    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:42.625522    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:42.661616    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:42.661616    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:42.748046    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:42.737433    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.739312    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.740542    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.743197    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.744170    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:42.737433    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.739312    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.740542    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.743197    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.744170    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:42.748046    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:42.748046    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:42.778854    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:42.778854    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:42.827860    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:42.827860    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
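The interleaved W ... node_ready.go lines come from the parallel no-preload test (pid 6768), which polls the node's Ready condition through a forwarded apiserver port and gets EOF because the connection drops before a response arrives. A rough sketch of such a retry probe follows; the URL is taken from the log line, but the timings are illustrative and the real code uses an authenticated client-go clientset, not a raw HTTP GET.

// node_ready_probe.go: a sketch of polling one node object straight from the
// apiserver REST endpoint until it answers.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// The test reaches a self-signed apiserver cert through a
			// forwarded port, so this sketch skips verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	url := "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000"
	for attempt := 1; attempt <= 5; attempt++ {
		resp, err := client.Get(url)
		if err != nil {
			// An abrupt EOF here means the TCP tunnel is up but the
			// apiserver died mid-exchange -- exactly the log's symptom.
			fmt.Printf("attempt %d: %v (will retry)\n", attempt, err)
			time.Sleep(10 * time.Second)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("attempt %d: HTTP %d, %d bytes\n", attempt, resp.StatusCode, len(body))
		return
	}
}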
	[... the same log-collection cycle repeats at 02:09:45, 02:09:48, 02:09:51, 02:09:54 and 02:09:57: pgrep finds no kube-apiserver, every per-component docker ps probe returns 0 containers, and "describe nodes" fails with connection refused on localhost:8443 ...]
	W1217 02:09:47.930106    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:00.809919    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:00.842222    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:00.872955    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.872955    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:00.876666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:00.906031    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.906031    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:00.909593    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:00.939873    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.939946    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:00.943346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:00.972609    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.972643    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:00.975886    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:01.005269    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.005269    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:01.009766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:01.041677    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.041677    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:01.048361    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:01.081235    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.081312    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:01.084849    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:01.113437    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.113437    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:01.113437    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:01.113437    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:01.160067    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:01.160624    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:01.225071    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:01.225071    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:01.265307    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:01.265307    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:01.348506    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:01.336920    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.338210    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.339738    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.341232    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.342188    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:01.336920    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.338210    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.339738    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.341232    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.342188    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:01.348535    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:01.348571    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
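The repeated "connection refused" stderr above means nothing is listening on the apiserver port yet. A minimal Go sketch of the same check, using only the localhost:8443 address taken from the log (illustrative, not minikube code):

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Dial the port kubectl is failing against in the log above.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// With no apiserver up this prints "connect: connection refused",
    		// matching the memcache.go errors in the stderr blocks.
    		fmt.Println("apiserver not reachable:", err)
    		return
    	}
    	defer conn.Close()
    	// The port answers; attempt the TLS handshake kubectl would perform.
    	// Certificate verification is skipped here purely for brevity.
    	tc := tls.Client(conn, &tls.Config{InsecureSkipVerify: true})
    	if err := tc.Handshake(); err != nil {
    		fmt.Println("TCP open but TLS handshake failed:", err)
    		return
    	}
    	fmt.Println("apiserver port is accepting TLS connections")
    }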
	W1217 02:09:57.967423    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:03.891628    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:03.925404    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:03.965688    6296 logs.go:282] 0 containers: []
	W1217 02:10:03.965688    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:03.968982    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:04.006348    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.006348    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:04.009769    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:04.039968    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.039968    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:04.044404    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:04.078472    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.078472    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:04.081894    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:04.113348    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.113348    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:04.117138    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:04.148885    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.148885    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:04.152756    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:04.181559    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.181616    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:04.185351    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:04.217017    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.217017    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:04.217017    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:04.217017    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:04.284540    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:04.284540    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:04.324402    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:04.324402    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:04.409943    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:04.395416    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.396326    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.402206    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.403321    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.404006    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:04.395416    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.396326    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.402206    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.403321    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.404006    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:04.409943    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:04.409943    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:04.438771    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:04.438771    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
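Each cycle above probes the same eight control-plane containers by name. A sketch of that probe, reusing the docker CLI flags copied from the log (the probe helper and its output formatting are illustrative assumptions):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    // probe mirrors the "docker ps -a --filter=name=k8s_<name>" calls above.
    func probe(name string) ([]string, error) {
    	out, err := exec.Command("docker", "ps", "-a",
    		"--filter=name=k8s_"+name, "--format={{.ID}}").Output()
    	if err != nil {
    		return nil, err
    	}
    	return strings.Fields(string(out)), nil
    }

    func main() {
    	for _, c := range []string{"kube-apiserver", "etcd", "coredns",
    		"kube-scheduler", "kube-proxy", "kube-controller-manager",
    		"kindnet", "kubernetes-dashboard"} {
    		ids, err := probe(c)
    		if err != nil {
    			fmt.Printf("probe %s: %v\n", c, err)
    			continue
    		}
    		// With no cluster running this prints "0 containers: []",
    		// reproducing the logs.go:282 lines above.
    		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
    	}
    }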
	I1217 02:10:06.997897    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:07.024185    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:07.054915    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.055512    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:07.060167    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:07.089778    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.089778    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:07.093773    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:07.124641    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.124641    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:07.128016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:07.154834    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.154915    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:07.158505    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:07.188568    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.188568    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:07.192962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:07.225078    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.225078    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:07.228699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:07.258599    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.258659    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:07.262590    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:07.291623    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.291623    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:07.291623    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:07.291623    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:07.322611    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:07.322611    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:07.374970    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:07.374970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:07.438795    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:07.438795    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:07.479442    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:07.479442    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:07.566162    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:07.555486    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.557015    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.558199    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559195    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559622    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:07.555486    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.557015    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.558199    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559195    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559622    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
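The "failed describe nodes ... Process exited with status 1" warnings arise because the pinned kubectl binary exits non-zero whenever the apiserver is unreachable. A sketch of running that exact command and surfacing the exit status, assuming the binary and kubeconfig paths from the log exist on the node (run locally here rather than over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Command string copied verbatim from the ssh_runner lines above.
    	cmd := exec.Command("/bin/bash", "-c",
    		"sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
    	out, err := cmd.CombinedOutput()
    	if err != nil {
    		// With the apiserver down this reports "exit status 1", which
    		// minikube's logs.go records as "Process exited with status 1".
    		fmt.Printf("describe nodes failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("%s", out)
    }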
	I1217 02:10:10.072312    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:10.096505    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:10.125617    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.125617    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:10.129377    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:10.157921    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.157921    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:10.161850    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:10.191705    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.191705    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:10.196003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:10.224412    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.224482    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:10.229368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:10.258140    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.258140    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:10.261205    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:10.292047    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.292047    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:10.296511    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:10.325818    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.325818    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:10.329752    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:10.359454    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.359530    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:10.359530    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:10.359530    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:10.413970    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:10.413970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:10.476665    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:10.476665    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:10.516335    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:10.516335    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:10.602353    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:10.592838    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.594139    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.595393    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.596552    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.597619    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:10.592838    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.594139    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.595393    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.596552    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.597619    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:10.602353    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:10.602353    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:10:08.007712    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
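The interleaved W... node_ready lines come from a second test process (pid 6768) polling the no-preload-184000 apiserver for the node's Ready condition. A bare-bones version of that poll against the URL in the log; it sends an unauthenticated request and skips certificate checks, which a real client would not:

    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"net/http"
    	"time"
    )

    func main() {
    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    	}
    	resp, err := client.Get("https://127.0.0.1:63565/api/v1/nodes/no-preload-184000")
    	if err != nil {
    		// An apiserver that closes the connection mid-handshake
    		// surfaces as EOF, as in the node_ready.go lines above.
    		fmt.Println("will retry:", err)
    		return
    	}
    	defer resp.Body.Close()
    	fmt.Println("status:", resp.Status)
    }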
	I1217 02:10:13.134148    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:13.159720    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:13.191534    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.191534    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:13.195626    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:13.230035    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.230035    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:13.233817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:13.266476    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.266476    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:13.270598    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:13.305852    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.305852    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:13.310349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:13.341805    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.341867    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:13.345346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:13.377945    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.377945    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:13.381659    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:13.411885    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.411957    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:13.416039    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:13.446642    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.446642    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:13.446642    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:13.446642    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:13.487083    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:13.487083    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:13.574632    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:13.564930    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.565686    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.568158    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.569159    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.570310    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:13.564930    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.565686    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.568158    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.569159    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.570310    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:13.574632    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:13.574632    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:13.604181    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:13.604702    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:13.660020    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:13.660020    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
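The cycles repeat on a roughly three-second cadence: pgrep for a kube-apiserver process and, if none exists, gather logs and retry. A sketch of that wait loop; the function name and the two-minute ceiling are assumptions, while the pgrep pattern is the one in the log:

    package main

    import (
    	"context"
    	"fmt"
    	"os/exec"
    	"time"
    )

    // waitForAPIServer polls the way the repeated
    // "sudo pgrep -xnf kube-apiserver.*minikube.*" lines above do.
    func waitForAPIServer(ctx context.Context) error {
    	ticker := time.NewTicker(3 * time.Second)
    	defer ticker.Stop()
    	for {
    		// pgrep exits 0 once a matching process exists.
    		if err := exec.Command("pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
    			return nil
    		}
    		select {
    		case <-ctx.Done():
    			return fmt.Errorf("kube-apiserver never appeared: %w", ctx.Err())
    		case <-ticker.C:
    		}
    	}
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
    	defer cancel()
    	if err := waitForAPIServer(ctx); err != nil {
    		fmt.Println(err)
    	}
    }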
	I1217 02:10:16.225038    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:16.248922    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:16.280247    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.280247    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:16.284285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:16.312596    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.312596    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:16.316952    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:16.345108    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.345108    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:16.348083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:16.377403    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.377403    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:16.380619    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:16.410555    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.410555    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:16.414048    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:16.446454    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.446454    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:16.449405    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:16.478967    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.478967    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:16.484108    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:16.516422    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.516422    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:16.516422    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:16.516422    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:16.580305    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:16.580305    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:16.618663    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:16.618663    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:16.705105    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:16.694074    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.695040    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.696842    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.698676    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.700646    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:16.694074    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.695040    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.696842    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.698676    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.700646    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:16.705105    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:16.705105    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:16.732046    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:16.732046    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:19.284431    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:19.307909    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:19.340842    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.340842    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:19.344830    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:19.371150    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.371150    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:19.374863    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:19.403216    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.403216    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:19.406907    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:19.433979    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.433979    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:19.438046    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:19.469636    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.469636    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:19.473675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:19.504296    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.504296    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:19.508671    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:19.535932    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.535932    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:19.539707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:19.567355    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.567416    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:19.567416    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:19.567416    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:19.629876    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:19.629876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:19.678547    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:19.678547    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:19.785306    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:19.776195    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.777270    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.778111    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.779442    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.780820    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:19.776195    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.777270    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.778111    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.779442    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.780820    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:19.785306    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:19.785371    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:19.813137    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:19.813137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
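Between probes, the same four gather commands run every cycle (kubelet journal, dmesg, Docker/cri-docker journal, container status). A sketch that executes them locally with the exact command strings from the log; minikube runs them over an SSH session instead:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Command strings copied verbatim from the ssh_runner lines above.
    	gathers := []struct{ name, cmd string }{
    		{"kubelet", "sudo journalctl -u kubelet -n 400"},
    		{"dmesg", "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"},
    		{"Docker", "sudo journalctl -u docker -u cri-docker -n 400"},
    		{"container status", "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"},
    	}
    	for _, g := range gathers {
    		out, err := exec.Command("/bin/bash", "-c", g.cmd).CombinedOutput()
    		fmt.Printf("== %s ==\n%s", g.name, out)
    		if err != nil {
    			fmt.Printf("(gathering %s failed: %v)\n", g.name, err)
    		}
    	}
    }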
	I1217 02:10:22.369643    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:10:18.049946    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:22.396731    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:22.431018    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.431018    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:22.434688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:22.463307    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.463307    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:22.467323    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:22.497065    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.497065    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:22.500574    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:22.531497    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.531564    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:22.535088    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:22.563706    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.563779    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:22.567344    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:22.602516    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.602597    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:22.606242    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:22.637637    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.637699    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:22.641314    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:22.668078    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.668078    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:22.668078    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:22.668078    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:22.754963    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:22.744973    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.745956    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.748143    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.749016    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.751155    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:22.744973    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.745956    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.748143    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.749016    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.751155    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:22.754963    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:22.754963    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:22.783172    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:22.783222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:22.840048    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:22.840048    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:22.900137    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:22.900137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:25.445900    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:25.472646    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:25.502929    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.502929    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:25.506274    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:25.537721    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.537721    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:25.543044    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:25.572924    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.572924    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:25.576391    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:25.607737    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.607798    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:25.611457    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:25.644967    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.645041    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:25.648690    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:25.677801    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.677801    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:25.681530    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:25.709148    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.709148    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:25.715667    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:25.746892    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.746892    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:25.746892    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:25.746892    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:25.796336    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:25.796336    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:25.862353    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:25.862353    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:25.902100    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:25.902100    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:25.988926    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:25.979946    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.980923    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.983755    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.985453    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.986609    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:25.979946    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.980923    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.983755    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.985453    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.986609    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:25.988926    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:25.988926    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:28.523475    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:28.549366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:28.580055    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.580055    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:28.583822    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:28.615168    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.615168    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:28.618724    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:28.650344    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.650368    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:28.654014    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:28.704033    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.704033    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:28.707699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:28.738871    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.738938    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:28.743270    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:28.775432    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.775432    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:28.779176    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:28.810234    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.810351    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:28.814357    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:28.845783    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.845783    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:28.845783    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:28.845783    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:28.902626    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:28.902626    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:28.963758    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:28.963758    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:29.002141    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:29.002141    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:29.104674    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:29.094415    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.095636    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.096872    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.097927    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.099112    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:29.094415    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.095636    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.096872    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.097927    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.099112    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:29.104674    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:29.104674    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:31.640270    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:31.668862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:31.703099    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.703099    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:31.706355    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:31.737408    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.737408    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:31.741549    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:31.771462    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.771549    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:31.775645    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:31.803600    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.803600    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:31.807313    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:31.835884    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.835884    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:31.840000    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:31.870518    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.870518    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:31.877548    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:31.905387    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.905387    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:31.909722    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:31.938258    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.938284    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:31.938284    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:31.938284    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:32.000115    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:32.000115    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:32.039351    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:32.039351    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:32.128849    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:32.117556    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.118519    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.121192    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.122137    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.123350    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:32.117556    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.118519    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.121192    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.122137    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.123350    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:32.128849    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:32.128849    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:32.155670    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:32.155670    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
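	The eight docker ps probes in each cycle above can be reproduced with one loop over the same name filters minikube uses (k8s_<component> is the container-name convention of the Docker runtime; this is an illustrative sketch, not minikube's own code):

	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(docker ps -a --filter=name=k8s_$c --format '{{.ID}}')
	      [ -z "$ids" ] && echo "no container matching $c" || echo "$c: $ids"
	    done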
	W1217 02:10:28.083644    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:34.707099    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:34.732689    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:34.763625    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.763625    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:34.767349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:34.797435    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.797435    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:34.801415    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:34.828785    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.828785    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:34.832654    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:34.864748    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.864748    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:34.868392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:34.896365    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.896365    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:34.900474    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:34.932681    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.932681    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:34.936571    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:34.966056    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.966056    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:34.969208    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:34.998362    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.998362    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:34.998362    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:34.998362    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:35.036977    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:35.036977    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:35.134841    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:35.123096    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.125161    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.126319    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.127728    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.129900    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:35.123096    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.125161    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.126319    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.127728    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.129900    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:35.134841    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:35.134841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:35.162429    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:35.162429    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:35.213960    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:35.214015    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:37.779857    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:37.806799    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:37.840730    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.840730    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:37.846443    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:37.875504    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.875504    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:37.879215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:37.910068    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.910068    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:37.913551    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:37.942897    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.942897    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:37.946741    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:37.978321    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.978321    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:37.982267    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:38.008421    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.008421    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:38.013043    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:38.043041    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.043041    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:38.049737    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:38.082117    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.082117    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:38.082117    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:38.082117    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:38.148970    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:38.148970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:38.189697    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:38.189697    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:38.276122    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:38.265842    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.267106    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.268317    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.270927    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.272044    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:38.265842    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.267106    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.268317    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.270927    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.272044    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:38.276122    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:38.276122    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:38.304355    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:38.304355    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
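	Every cycle opens with the same pgrep probe, so the overall wait amounts to a poll loop like the hedged sketch below (the 3-second interval is an assumption; the pattern is copied verbatim from the log):

	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
	      sleep 3   # retry until an apiserver process appears
	    done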
	I1217 02:10:40.862712    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:40.889041    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:40.921169    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.921169    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:40.924297    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:40.956313    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.956356    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:40.960294    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:40.990144    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.990144    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:40.993876    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:41.026732    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.026803    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:41.030745    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:41.073825    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.073825    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:41.078152    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:41.105859    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.105859    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:41.111714    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:41.143286    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.143324    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:41.146776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:41.176314    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.176345    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:41.176345    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:41.176345    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:41.213266    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:41.213266    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:41.300305    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:41.290426    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.291562    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.292511    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.293690    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.294979    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:41.290426    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.291562    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.292511    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.293690    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.294979    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:41.300305    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:41.300305    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:41.328560    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:41.328621    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:41.375953    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:41.375953    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:10:38.119927    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:43.941613    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:43.967455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:44.000199    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.000199    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:44.003568    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:44.035058    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.035058    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:44.040590    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:44.083687    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.083687    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:44.087476    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:44.115776    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.115776    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:44.119318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:44.155471    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.155513    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:44.159433    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:44.191599    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.191636    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:44.195145    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:44.228181    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.228211    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:44.231971    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:44.259687    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.259763    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:44.259763    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:44.259763    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:44.323705    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:44.323705    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:44.365401    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:44.365401    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:44.453893    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:44.444848    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.446165    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.447569    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.449198    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.450326    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:44.444848    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.446165    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.447569    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.449198    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.450326    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:44.453893    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:44.453893    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:44.480694    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:44.480694    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:47.042501    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:47.067663    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:47.108433    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.108433    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:47.112206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:47.144336    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.144336    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:47.148449    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:47.182968    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.183049    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:47.186614    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:47.215738    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.215738    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:47.219595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:47.248444    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.248511    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:47.252434    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:47.280975    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.280975    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:47.284966    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:47.317178    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.317178    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:47.321223    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:47.352638    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.352638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:47.352638    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:47.352638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:47.390049    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:47.390049    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:47.479425    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:47.469913    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.471092    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.472262    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.473545    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.474680    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:47.469913    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.471092    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.472262    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.473545    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.474680    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:47.479425    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:47.479425    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:47.505331    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:47.505331    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:47.556431    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:47.556431    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:50.124255    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:50.151100    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:50.184499    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.184565    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:50.187696    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:50.221764    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.221764    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:50.225471    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:50.253823    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.253823    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:50.260470    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:50.289768    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.289815    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:50.295283    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:50.321597    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.321597    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:50.325774    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:50.356707    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.356707    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:50.360685    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:50.390099    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.390099    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:50.393971    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:50.420950    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.420950    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:50.420950    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:50.420950    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:50.484730    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:50.484730    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:50.523997    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:50.523997    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:50.618256    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:50.607046    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.608047    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.610609    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.611743    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.612938    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:50.607046    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.608047    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.610609    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.611743    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.612938    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:50.618256    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:50.618256    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:50.645077    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:50.645077    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:10:48.158175    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
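	The interleaved W-lines from pid 6768 belong to the parallel no-preload test, which is polling its own node object. An equivalent probe by hand, with the URL taken from the log line above (-k is needed because the test cluster's certificate is self-signed; an unauthenticated 401/403 response, rather than EOF, would still show the port is serving):

	    curl -k https://127.0.0.1:63565/api/v1/nodes/no-preload-184000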
	I1217 02:10:53.200622    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:53.223348    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:53.253589    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.253589    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:53.258688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:53.287647    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.287689    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:53.291555    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:53.324358    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.324403    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:53.327650    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:53.355417    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.355417    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:53.359780    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:53.390012    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.390012    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:53.393536    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:53.420636    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.420672    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:53.424429    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:53.453665    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.453744    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:53.456764    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:53.486769    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.486836    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:53.486875    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:53.486875    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:53.552513    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:53.552513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:53.593054    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:53.593054    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:53.683171    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:53.673168    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.674217    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.677093    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.678848    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.679784    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:53.673168    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.674217    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.677093    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.678848    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.679784    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:53.683207    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:53.683230    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:53.712513    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:53.712513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:56.288600    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:56.314380    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:56.347447    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.347447    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:56.351158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:56.381779    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.381779    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:56.385232    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:56.423000    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.423000    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:56.427083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:56.456635    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.456635    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:56.460509    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:56.490868    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.490868    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:56.496594    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:56.523671    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.523671    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:56.527847    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:56.559992    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.559992    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:56.565352    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:56.591708    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.591708    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:56.591708    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:56.591708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:56.656572    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:56.656572    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:56.696334    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:56.696334    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:56.788411    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:56.777962   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.779251   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.780163   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.782593   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.783670   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:56.777962   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.779251   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.780163   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.782593   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.783670   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:56.788411    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:56.788411    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:56.815762    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:56.815762    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:59.370676    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:59.404615    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:59.440735    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.440735    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:59.446758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:59.475209    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.475209    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:59.479521    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:59.509465    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.509465    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:59.513228    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:59.542409    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.542409    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:59.546008    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:59.575778    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.575778    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:59.579759    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:59.613465    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.613465    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:59.617266    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:59.645245    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.645245    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:59.649170    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:59.680413    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.680449    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:59.680449    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:59.680449    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:59.713987    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:59.713987    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:59.764930    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:59.764994    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:59.832077    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:59.832077    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:59.870681    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:59.870681    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:59.953336    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:59.942085   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.942906   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.945651   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.947051   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.948218   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:59.942085   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.942906   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.945651   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.947051   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.948218   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
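
The block above is one full iteration of a pattern that repeats for the rest of this log: minikube probes for each control-plane container by name, finds none, and then `kubectl describe nodes` fails because nothing answers on localhost:8443. Below is a minimal Go sketch of that probe loop, reconstructed from the log lines only (it is not minikube source); the component list is copied verbatim from the `--filter=name=k8s_...` arguments above.

    // Hypothetical sketch of the probe pattern visible in the log: for each
    // control-plane component, run
    //   docker ps -a --filter=name=k8s_<component> --format={{.ID}}
    // and warn when the list comes back empty.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{
            "kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet",
            "kubernetes-dashboard",
        }
        for _, c := range components {
            out, err := exec.Command("docker", "ps", "-a",
                "--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
            if err != nil {
                fmt.Printf("probe for %q failed: %v\n", c, err)
                continue
            }
            ids := strings.Fields(string(out))
            fmt.Printf("%d containers: %v\n", len(ids), ids)
            if len(ids) == 0 {
                // Mirrors the log's: No container was found matching "<component>"
                fmt.Printf("No container was found matching %q\n", c)
            }
        }
    }

In this run every probe returns 0 containers, which is why each cycle ends in the same describe-nodes failure.
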
	W1217 02:10:58.200115    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:11:02.457745    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:02.492666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:02.526665    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.526665    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:02.530862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:02.560353    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.560413    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:02.564099    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:02.595430    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.595430    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:02.599884    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:02.629744    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.629744    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:02.633637    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:02.662623    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.662623    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:02.666817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:02.694696    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.694696    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:02.698194    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:02.727384    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.727442    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:02.731483    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:02.766114    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.766114    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:02.766114    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:02.766114    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:02.830755    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:02.830755    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:02.870216    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:02.870216    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:02.958327    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:02.947356   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.948306   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.949403   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.950298   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.952486   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:02.947356   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.948306   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.949403   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.950298   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.952486   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
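
Every "connection refused" line in these stderr blocks reports the same low-level fact: no process is listening on the apiserver port inside the node, so kubectl cannot even fetch the API group list. A hypothetical one-file check, assuming the same localhost:8443 address seen above:

    // Hypothetical sketch: confirm what each "dial tcp [::1]:8443: connect:
    // connection refused" line is reporting, with a plain TCP dial.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // On this node the dial fails immediately, matching the log.
            fmt.Println("apiserver port closed:", err)
            return
        }
        conn.Close()
        fmt.Println("apiserver port open")
    }
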
	I1217 02:11:02.958327    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:02.958380    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:02.984980    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:02.984980    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:05.540158    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:05.564812    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:05.595638    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.595638    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:05.599748    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:05.628748    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.628748    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:05.632878    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:05.666232    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.666257    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:05.670293    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:05.699654    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.699654    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:05.703004    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:05.733113    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.733113    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:05.737096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:05.765591    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.765639    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:05.770398    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:05.796360    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.796360    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:05.800240    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:05.829847    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.829914    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:05.829914    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:05.829945    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:05.880789    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:05.880789    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:05.943002    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:05.943002    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:05.983389    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:05.983389    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:06.076023    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:06.063780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.064562   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.067564   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.069726   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.070666   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:06.063780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.064562   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.067564   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.069726   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.070666   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:06.076023    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:06.076023    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:08.608606    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:08.632215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:08.665017    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.665017    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:08.669299    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:08.695355    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.695355    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:08.699306    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:08.729054    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.729054    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:08.732454    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:08.759881    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.759881    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:08.764328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:08.793695    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.793777    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:08.797908    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:08.826225    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.826225    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:08.829679    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:08.859645    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.859645    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:08.863083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:08.893657    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.893657    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:08.893657    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:08.893657    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:08.958163    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:08.958163    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:08.997418    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:08.997418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:09.087973    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:09.074815   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.076834   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.078823   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.080747   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.081590   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:09.074815   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.076834   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.078823   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.080747   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.081590   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:09.087973    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:09.087973    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:09.115687    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:09.115687    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
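
The "container status" collector above uses a shell fallback chain: prefer crictl if `which` finds it on PATH, otherwise fall back to `sudo docker ps -a`. A sketch of invoking that same one-liner, with the command string copied from the log (it is run inside the Linux node, where /bin/bash exists):

    // Hypothetical sketch of the fallback chain used by the "container status"
    // collector: crictl if present, docker ps -a otherwise.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("/bin/bash", "-c",
            "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a").CombinedOutput()
        if err != nil {
            fmt.Println("both crictl and docker listing failed:", err)
        }
        fmt.Print(string(out))
    }
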
	I1217 02:11:11.697770    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:11.725676    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:11.758809    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.758809    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:11.762929    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:11.794198    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.794198    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:11.798023    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:11.828890    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.828890    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:11.833358    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:11.865217    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.865217    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:11.868915    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:11.897672    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.897672    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:11.901235    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:11.931725    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.931808    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:11.935264    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:11.966263    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.966263    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:11.970422    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:11.999856    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.999856    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:11.999856    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:11.999856    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:12.064137    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:12.064137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:12.102491    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:12.102491    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:12.183568    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:12.174095   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.175081   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.176122   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.177427   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.178548   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:12.174095   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.175081   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.176122   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.177427   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.178548   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:12.183568    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:12.183568    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:12.212178    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:12.212178    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:11:08.241744    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:11:16.871278    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1217 02:11:16.871278    6768 node_ready.go:38] duration metric: took 6m0.0008728s for node "no-preload-184000" to be "Ready" ...
	I1217 02:11:16.874572    6768 out.go:203] 
	W1217 02:11:16.876457    6768 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 02:11:16.876457    6768 out.go:285] * 
	W1217 02:11:16.879042    6768 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:11:16.881673    6768 out.go:203] 
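
That ends the no-preload start (pid 6768): its Ready poll against https://127.0.0.1:63565 kept failing with EOF until the 6-minute budget ran out (the "duration metric: took 6m0.0008728s" line), so the run exits with GUEST_START while pid 6296 continues its own probe cycles below. A hypothetical sketch of such a deadline-bounded wait loop, illustrative only and not the minikube implementation, with the 6m0s timeout taken from the log:

    // Hypothetical sketch: poll a readiness check until it succeeds or the
    // context deadline fires, as the node_ready wait above appears to do.
    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    func waitNodeReady(ctx context.Context, check func() (bool, error)) error {
        tick := time.NewTicker(5 * time.Second)
        defer tick.Stop()
        for {
            if ready, err := check(); err == nil && ready {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
            case <-tick.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        err := waitNodeReady(ctx, func() (bool, error) {
            // Stand-in for the repeated GET /api/v1/nodes/no-preload-184000 EOFs.
            return false, errors.New("EOF")
        })
        fmt.Println(err) // ends in "context deadline exceeded", as in the log
    }
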
	I1217 02:11:14.772821    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:14.797656    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:14.826900    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.826900    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:14.829894    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:14.859202    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.859202    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:14.862783    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:14.891414    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.891414    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:14.895052    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:14.925404    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.925404    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:14.928966    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:14.959295    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.959330    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:14.962893    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:14.991696    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.991730    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:14.994776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:15.025468    6296 logs.go:282] 0 containers: []
	W1217 02:11:15.025468    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:15.031674    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:15.060661    6296 logs.go:282] 0 containers: []
	W1217 02:11:15.060661    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:15.060733    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:15.060733    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:15.120513    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:15.120513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:15.159608    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:15.159608    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:15.244418    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:15.235611   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.236439   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.238662   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.239643   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.240776   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:15.235611   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.236439   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.238662   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.239643   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.240776   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:15.244418    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:15.244418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:15.271288    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:15.271288    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:17.830556    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:17.850600    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:17.886696    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.886696    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:17.890674    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:17.921702    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.921702    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:17.924697    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:17.952692    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.952692    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:17.956701    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:17.984691    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.984691    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:17.988655    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:18.024626    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.024663    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:18.028558    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:18.060310    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.060310    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:18.064024    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:18.100124    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.100124    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:18.104105    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:18.141223    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.141223    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:18.141223    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:18.141223    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:18.179686    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:18.179686    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:18.311240    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:18.298507   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.299764   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.301130   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.305360   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.306018   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:18.298507   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.299764   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.301130   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.305360   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.306018   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:18.311240    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:18.311240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:18.342566    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:18.342615    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:18.393872    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:18.393872    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:20.977693    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:21.006733    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:21.035136    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.035201    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:21.039202    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:21.069636    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.069636    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:21.075448    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:21.105437    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.105437    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:21.108735    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:21.136602    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.136602    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:21.140124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:21.168674    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.168674    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:21.172368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:21.204723    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.204723    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:21.208123    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:21.237130    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.237130    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:21.240654    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:21.268170    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.268170    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:21.268170    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:21.268170    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:21.333642    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:21.333642    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:21.372230    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:21.372230    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:21.467012    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:21.456191   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457465   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457898   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.460543   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.461536   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:21.456191   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457465   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457898   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.460543   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.461536   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:21.467012    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:21.467012    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:21.495867    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:21.495867    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:24.053568    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:24.079587    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:24.110362    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.110399    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:24.113326    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:24.141818    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.141818    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:24.145313    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:24.172031    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.172031    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:24.176197    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:24.205114    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.205133    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:24.208437    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:24.238244    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.238244    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:24.242692    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:24.271687    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.271687    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:24.276384    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:24.307922    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.307922    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:24.311538    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:24.350108    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.350108    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:24.350108    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:24.350108    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:24.402159    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:24.402224    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:24.463824    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:24.463824    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:24.503645    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:24.503645    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:24.591969    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:24.584283   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.585294   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.586182   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.588436   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.589378   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:24.584283   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.585294   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.586182   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.588436   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.589378   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:24.591969    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:24.591969    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:27.123965    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:27.157839    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:27.199991    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.199991    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:27.204206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:27.231981    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.231981    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:27.235568    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:27.265668    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.265668    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:27.269162    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:27.299488    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.299488    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:27.303277    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:27.335769    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.335769    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:27.339516    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:27.369112    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.369112    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:27.372881    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:27.402031    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.402031    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:27.405780    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:27.436610    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.436610    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:27.436610    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:27.436610    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:27.523394    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:27.514396   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.515456   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.516979   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.518950   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.519928   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:27.514396   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.515456   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.516979   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.518950   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.519928   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
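Every retry below fails the same way, and this stderr is the symptom rather than the fault: `kubectl` runs inside the node against `https://localhost:8443` (the API-server port minikube configures), the dial is refused because, per the probes above, no kube-apiserver container was ever created, and the five memcache.go lines are just the client's own discovery retries against that closed port. One way to confirm by hand that the port is simply closed, assuming curl is present in the node image (`/healthz` is the standard API-server health endpoint; `-k` skips certificate checks for a quick probe):

    # fails with a connection-refused error while no apiserver listens on 8443
    curl -k https://localhost:8443/healthz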
	I1217 02:11:27.523917    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:27.523957    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:27.552476    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:27.552476    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:27.607026    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:27.607078    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:27.670834    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:27.670834    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
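The kernel-log capture above is the tersest of the five sources, so a commented copy of the same invocation may help (flag meanings per util-linux dmesg; the command itself is unchanged):

    # -P (--nopager): write straight to stdout instead of paging
    # -H (--human): human-readable timestamps
    # -L=never (--color=never): no ANSI escapes in the captured log
    # --level warn,err,crit,alert,emerg: keep only warning-and-worse messages
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400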
	I1217 02:11:30.216027    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:30.241711    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:30.272275    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.272275    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:30.276071    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:30.304635    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.304635    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:30.307639    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:30.340374    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.340374    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:30.343758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:30.374162    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.374162    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:30.378010    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:30.407836    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.407836    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:30.411411    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:30.440002    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.440002    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:30.443429    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:30.472647    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.472647    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:30.476538    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:30.510744    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.510744    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:30.510744    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:30.510744    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:30.575069    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:30.575156    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:30.639732    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:30.640731    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:30.685195    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:30.685195    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:30.775246    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:30.762447   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.763441   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.764998   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.765913   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.768466   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:30.762447   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.763441   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.764998   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.765913   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.768466   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:30.775295    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:30.775295    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
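Both journal captures in these cycles have the same shape: `-u` selects a systemd unit and may be repeated (as with `-u docker -u cri-docker`), while `-n 400` caps the capture at the last 400 lines. Run by hand inside the node they look like this (commands verbatim from the log, comments added):

    # runtime side: the docker daemon plus the cri-docker shim, interleaved by time
    sudo journalctl -u docker -u cri-docker -n 400
    # node-agent side: the kubelet, which would be creating the k8s_* containers
    sudo journalctl -u kubelet -n 400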
	I1217 02:11:33.308109    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:33.334329    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:33.365061    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.365061    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:33.370854    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:33.399488    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.399488    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:33.406335    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:33.436434    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.436434    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:33.439783    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:33.468947    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.468947    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:33.474014    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:33.502568    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.502568    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:33.506146    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:33.535706    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.535706    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:33.540016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:33.573811    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.573811    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:33.577712    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:33.606321    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.606321    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:33.606321    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:33.606321    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:33.671884    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:33.671884    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:33.712095    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:33.712095    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:33.800767    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:33.788569   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.789526   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.793280   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.794779   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.795796   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:33.788569   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.789526   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.793280   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.794779   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.795796   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:33.800848    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:33.800884    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:33.829402    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:33.829474    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
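The "container status" source above is written to be runtime-agnostic: `which crictl || echo crictl` substitutes the resolved crictl path when the binary exists (and the bare, failing name when it does not), so the trailing `|| sudo docker ps -a` still yields a listing on Docker-only nodes. A shorter sketch with the same fallback behavior, assuming nothing beyond the two CLIs:

    # try the CRI listing first; any failure (including a missing crictl binary)
    # falls through to the Docker CLI
    sudo crictl ps -a 2>/dev/null || sudo docker ps -a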
	I1217 02:11:36.410236    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:36.438912    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:36.468229    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.468229    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:36.472231    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:36.501220    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.501220    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:36.506462    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:36.539556    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.539556    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:36.543603    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:36.584367    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.584367    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:36.588513    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:36.620670    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.620670    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:36.626030    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:36.654239    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.654239    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:36.658962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:36.689023    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.689023    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:36.693754    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:36.721351    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.721351    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:36.721351    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:36.721351    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:36.787832    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:36.787832    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:36.828019    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:36.828019    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:36.916923    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:36.906317   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.907259   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.909560   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.910589   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.911494   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:36.906317   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.907259   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.909560   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.910589   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.911494   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:36.916923    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:36.916923    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:36.946231    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:36.946265    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:39.498459    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:39.522909    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:39.553462    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.553462    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:39.557524    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:39.585462    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.585462    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:39.591342    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:39.619332    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.619399    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:39.623096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:39.651071    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.651071    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:39.654766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:39.683502    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.683502    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:39.687390    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:39.715332    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.715332    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:39.718932    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:39.749019    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.749019    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:39.752739    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:39.783378    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.783378    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:39.783378    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:39.783378    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:39.835019    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:39.835019    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:39.899542    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:39.899542    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:39.938717    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:39.938717    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:40.026359    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:40.016461   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.017619   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.018723   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.019917   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.021008   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:40.016461   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.017619   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.018723   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.019917   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.021008   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:40.026403    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:40.026446    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:42.561805    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:42.585507    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:42.613091    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.613091    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:42.616991    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:42.647608    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.647608    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:42.651380    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:42.680540    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.680540    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:42.683625    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:42.717014    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.717014    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:42.721369    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:42.750017    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.750017    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:42.753961    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:42.785164    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.785164    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:42.788883    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:42.817424    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.817424    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:42.821266    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:42.853247    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.853247    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:42.853247    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:42.853247    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:42.910034    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:42.910052    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:42.970436    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:42.970436    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:43.009833    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:43.010830    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:43.102803    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:43.091179   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.092013   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.095588   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.097098   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.098447   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:43.091179   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.092013   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.095588   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.097098   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.098447   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:43.102803    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:43.102803    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:45.636418    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:45.661677    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:45.695141    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.695141    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:45.699189    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:45.729376    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.729376    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:45.733753    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:45.764365    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.764365    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:45.767917    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:45.799287    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.799287    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:45.802968    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:45.835270    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.835270    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:45.838766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:45.868660    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.868660    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:45.875727    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:45.903566    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.903566    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:45.907562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:45.937452    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.937452    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:45.937452    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:45.937452    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:45.965091    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:45.965091    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:46.013173    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:46.013173    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:46.077113    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:46.077113    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:46.118527    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:46.118527    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:46.207662    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:46.198319   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.199665   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.200697   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.201868   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.202946   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:46.198319   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.199665   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.200697   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.201868   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.202946   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:48.714055    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:48.741412    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:48.772767    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.772767    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:48.776092    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:48.804946    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.805020    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:48.808538    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:48.837488    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.837488    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:48.840453    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:48.871139    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.871139    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:48.875518    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:48.904264    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.904264    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:48.911351    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:48.939118    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.939118    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:48.943340    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:48.970934    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.970934    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:48.974990    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:49.005140    6296 logs.go:282] 0 containers: []
	W1217 02:11:49.005174    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:49.005205    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:49.005234    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:49.075925    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:49.075925    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:49.116144    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:49.116144    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:49.196968    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:49.188036   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.189151   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.190274   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.191246   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.192420   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:49.188036   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.189151   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.190274   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.191246   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.192420   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:49.197074    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:49.197074    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:49.222883    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:49.223404    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:51.783312    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:51.809151    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:51.839751    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.839751    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:51.844016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:51.895178    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.895178    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:51.899341    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:51.930311    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.930311    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:51.933797    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:51.961857    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.961857    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:51.966036    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:51.993647    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.993647    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:51.997672    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:52.026485    6296 logs.go:282] 0 containers: []
	W1217 02:11:52.026485    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:52.032726    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:52.062039    6296 logs.go:282] 0 containers: []
	W1217 02:11:52.062039    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:52.066379    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:52.096772    6296 logs.go:282] 0 containers: []
	W1217 02:11:52.096772    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:52.096772    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:52.096772    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:52.163369    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:52.163369    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:52.203719    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:52.203719    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:52.295324    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:52.285688   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.286944   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.288407   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.289493   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.290536   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:52.285688   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.286944   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.288407   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.289493   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.290536   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:52.295324    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:52.295324    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:52.323234    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:52.323234    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:54.878824    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:54.907441    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:54.944864    6296 logs.go:282] 0 containers: []
	W1217 02:11:54.944864    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:54.948030    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:54.980769    6296 logs.go:282] 0 containers: []
	W1217 02:11:54.980769    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:54.987506    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:55.019726    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.019726    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:55.024226    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:55.052618    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.052618    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:55.056658    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:55.085528    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.085607    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:55.089212    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:55.120453    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.120525    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:55.124591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:55.154725    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.154749    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:55.157707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:55.187692    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.187692    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:55.187692    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:55.187692    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:55.252848    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:55.252848    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:55.318197    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:55.318197    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:55.358145    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:55.358145    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:55.439213    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:55.430988   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.431927   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.433074   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.434586   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.435691   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:55.430988   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.431927   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.433074   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.434586   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.435691   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:55.439213    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:55.439744    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:57.972346    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:57.997412    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:58.029794    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.029794    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:58.033582    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:58.064729    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.064729    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:58.068722    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:58.103854    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.103854    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:58.107069    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:58.140767    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.140767    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:58.145080    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:58.172792    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.172792    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:58.177038    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:58.205809    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.205809    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:58.209371    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:58.236353    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.236353    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:58.240621    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:58.269469    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.269469    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:58.269469    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:58.269469    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:58.324960    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:58.324960    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:58.384708    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:58.384708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:58.423476    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:58.423476    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:58.512328    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:58.500192   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.501577   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.503665   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.506831   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.509044   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:58.500192   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.501577   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.503665   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.506831   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.509044   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:58.512387    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:58.512387    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	[... the same log-gathering cycle repeats every ~3s from 02:12:01 through 02:12:23 (kubectl PIDs 13465, 13631, 13789, 13955, 14114, 14273, 14438, 14590): each pass finds 0 containers matching kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, and kubernetes-dashboard, and "describe nodes" again exits with status 1 on "connection refused" to localhost:8443 ...]
	I1217 02:12:25.670974    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:25.706279    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:25.741150    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.741150    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:25.745079    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:25.773721    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.773782    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:25.779777    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:25.808516    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.808516    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:25.813011    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:25.844755    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.844755    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:25.848591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:25.877332    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.877332    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:25.881053    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:25.907973    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.907973    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:25.914424    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:25.941138    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.941138    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:25.945025    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:25.974760    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.974760    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
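
	[editor's note] Each iteration above shells out to `docker ps -a` once per control-plane component with a `name=k8s_<component>` filter; an empty result produces the `0 containers: []` / `No container was found matching ...` pairs. A hedged sketch of that detection sweep (component names are taken from the log; the helper is illustrative, not minikube's API):

	// detect.go - the per-component container-detection pattern from the log.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerIDs lists IDs of all containers (running or exited) whose name
	// matches the k8s_<component> prefix used by the Docker kubelet shim.
	func containerIDs(component string) ([]string, error) {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+component,
			"--format", "{{.ID}}").Output()
		if err != nil {
			return nil, err
		}
		return strings.Fields(string(out)), nil
	}

	func main() {
		for _, c := range []string{"kube-apiserver", "etcd", "coredns",
			"kube-scheduler", "kube-proxy", "kube-controller-manager",
			"kindnet", "kubernetes-dashboard"} {
			ids, err := containerIDs(c)
			if err != nil {
				fmt.Printf("%s: docker ps failed: %v\n", c, err)
				continue
			}
			// An empty slice corresponds to the log's "0 containers: []" lines.
			fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
		}
	}
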
	I1217 02:12:25.974760    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:25.974760    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:26.012354    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:26.012354    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:26.113177    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:26.103007   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.104679   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.105508   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.108836   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.110003   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:26.103007   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.104679   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.105508   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.108836   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.110003   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
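
	[editor's note] The "describe nodes" step runs the version-pinned kubectl binary inside the node against the in-VM kubeconfig, and a non-zero exit is logged at W level ("failed describe nodes") without aborting collection. A minimal sketch of that step under those assumptions (paths copied from the log; this only makes sense executed inside the minikube node):

	// describe.go - run pinned kubectl, capture stdout/stderr separately,
	// and treat failure as a warning, matching the W-level lines above.
	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("sudo",
			"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
			"describe", "nodes",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		var stdout, stderr bytes.Buffer
		cmd.Stdout = &stdout
		cmd.Stderr = &stderr
		if err := cmd.Run(); err != nil {
			// Mirrors the log: exit status 1, empty stdout, and the
			// connection-refused errors on stderr; collection moves on.
			fmt.Printf("W failed describe nodes: %v\nstdout:\n%s\nstderr:\n%s\n",
				err, stdout.String(), stderr.String())
			return
		}
		fmt.Println(stdout.String())
	}
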
	I1217 02:12:26.113177    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:26.113177    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:26.144162    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:26.144245    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:26.194605    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:26.195138    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:28.763811    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:28.789762    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:28.820544    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.820544    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:28.824807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:28.855728    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.855728    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:28.860354    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:28.894655    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.894655    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:28.898069    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:28.928310    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.928394    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:28.932124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:28.967209    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.967209    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:28.973126    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:29.002975    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.003024    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:29.006839    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:29.044805    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.044881    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:29.049158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:29.078108    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.078142    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:29.078174    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:29.078202    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:29.142751    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:29.142751    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:29.182082    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:29.182082    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:29.271566    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:29.260263   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.261578   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.262370   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.263821   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.265155   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:29.260263   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.261578   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.262370   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.263821   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.265155   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:29.271596    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:29.271643    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:29.299332    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:29.299332    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
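
	[editor's note] The container-status line above packs a two-level fallback into one bash command: the backtick substitution resolves crictl's path if it is installed (otherwise it yields the bare name, which fails), and the outer || falls back to plain `docker ps -a`. A small sketch showing how such a line is handed to bash, in the spirit of the ssh_runner calls in this log (the Go wrapper is illustrative):

	// fallback.go - execute the crictl-or-docker container-status command.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Copied verbatim from the log's container-status step.
		script := "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
		out, err := exec.Command("/bin/bash", "-c", script).CombinedOutput()
		if err != nil {
			fmt.Println("container status failed:", err)
		}
		fmt.Print(string(out))
	}
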
	I1217 02:12:31.856743    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:31.882741    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:31.912323    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.912372    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:31.917046    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:31.948587    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.948631    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:31.952095    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:31.981682    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.981682    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:31.985888    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:32.022173    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.022173    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:32.026061    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:32.070026    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.070026    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:32.074016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:32.105255    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.105255    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:32.109062    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:32.140873    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.140947    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:32.143941    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:32.172848    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.172876    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:32.172876    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:32.172876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:32.237207    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:32.237207    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:32.275838    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:32.275838    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:32.360656    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:32.349190   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.350542   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.352960   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.354559   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.355745   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:32.349190   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.350542   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.352960   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.354559   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.355745   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:32.360656    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:32.360656    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:32.391099    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:32.391099    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:34.970955    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:35.002200    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:35.036658    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.036658    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:35.041208    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:35.068998    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.068998    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:35.075758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:35.105253    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.105253    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:35.109356    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:35.137411    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.137411    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:35.141289    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:35.168542    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.168542    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:35.174717    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:35.204677    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.204677    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:35.209675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:35.240901    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.240901    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:35.244034    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:35.276453    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.276453    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:35.276453    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:35.276453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:35.341158    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:35.341158    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:35.381822    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:35.381822    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:35.472890    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:35.461861   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.463097   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.464080   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.465245   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.466603   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:35.461861   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.463097   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.464080   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.465245   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.466603   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:35.472890    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:35.472890    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:35.501374    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:35.501374    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:38.054644    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:38.080787    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:38.112397    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.112420    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:38.116070    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:38.144341    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.144396    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:38.148080    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:38.177159    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.177159    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:38.181253    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:38.210000    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.210000    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:38.215709    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:38.243526    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.243526    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:38.247620    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:38.278443    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.278443    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:38.282504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:38.314414    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.314414    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:38.317968    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:38.345306    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.345306    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:38.345306    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:38.345412    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:38.425240    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:38.414795   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.415865   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.416969   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.418280   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.420090   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:38.414795   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.415865   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.416969   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.418280   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.420090   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:38.425240    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:38.425240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:38.455129    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:38.455129    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:38.514775    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:38.514775    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:38.574255    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:38.574255    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:41.116537    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:41.139650    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:41.169726    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.169814    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:41.173285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:41.204812    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.204812    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:41.208892    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:41.235980    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.235980    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:41.240200    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:41.271415    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.271415    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:41.275005    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:41.303967    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.303967    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:41.309707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:41.340401    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.340401    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:41.343688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:41.374008    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.374008    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:41.377325    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:41.409502    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.409563    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:41.409563    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:41.409610    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:41.472168    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:41.472168    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:41.513098    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:41.513098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:41.601716    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:41.590607   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.591236   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.594281   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.595448   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.596679   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:41.590607   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.591236   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.594281   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.595448   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.596679   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:41.601716    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:41.601716    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:41.629092    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:41.629148    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:44.185012    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:44.210566    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:44.242274    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.242274    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:44.248762    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:44.280241    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.280307    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:44.283818    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:44.312929    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.312997    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:44.316643    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:44.343840    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.343840    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:44.347619    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:44.378547    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.378547    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:44.382595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:44.410908    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.410908    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:44.414686    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:44.448329    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.448329    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:44.453888    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:44.484842    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.484842    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:44.484842    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:44.484842    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:44.550740    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:44.550740    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:44.589666    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:44.589666    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:44.677625    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:44.666291   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.667584   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.668804   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.671406   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.673722   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:44.666291   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.667584   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.668804   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.671406   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.673722   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:44.677625    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:44.677625    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:44.706051    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:44.706051    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:47.257477    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:47.286845    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:47.315563    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.315563    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:47.319220    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:47.351319    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.351319    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:47.354946    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:47.382237    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.382237    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:47.386106    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:47.415608    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.415608    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:47.419575    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:47.449212    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.449241    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:47.452978    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:47.482356    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.482356    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:47.486511    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:47.518156    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.518205    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:47.522254    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:47.550631    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.550631    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:47.550631    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:47.550727    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:47.615950    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:47.615950    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:47.655928    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:47.655928    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:47.744126    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:47.732398   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.733599   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.736473   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.737237   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.739895   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:47.732398   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.733599   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.736473   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.737237   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.739895   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:47.744164    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:47.744210    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:47.773502    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:47.773502    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:50.331328    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:50.368555    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:50.407443    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.407443    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:50.411798    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:50.440520    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.440544    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:50.444430    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:50.478050    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.478050    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:50.481848    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:50.513603    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.513658    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:50.517565    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:50.551935    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.552946    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:50.556641    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:50.591171    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.591171    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:50.594981    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:50.624821    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.624821    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:50.628756    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:50.661209    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.661209    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:50.661209    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:50.661209    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:50.693141    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:50.693141    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:50.746322    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:50.746322    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:50.805974    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:50.805974    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:50.844572    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:50.844572    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:50.935133    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:50.925528   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.926281   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.929008   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.930044   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.931058   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:50.925528   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.926281   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.929008   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.930044   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.931058   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
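
	[editor's note] The timestamps show the whole gather-and-check cycle repeating roughly every three seconds, each round opened by the `pgrep -xnf kube-apiserver.*minikube.*` probe that keeps failing. A hedged sketch of that poll-until-deadline cadence (the pgrep pattern is copied from the log; the loop is illustrative, not minikube's implementation, and the 6-minute deadline is an assumption):

	// waitloop.go - poll for a running kube-apiserver process until a deadline.
	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	func main() {
		deadline := time.Now().Add(6 * time.Minute)
		for time.Now().Before(deadline) {
			// -x exact match, -n newest process, -f match full command line.
			err := exec.Command("sudo", "pgrep", "-xnf",
				"kube-apiserver.*minikube.*").Run()
			if err == nil {
				fmt.Println("kube-apiserver is up")
				return
			}
			// In the real test each failed probe triggers another round of
			// log gathering; here we simply sleep and retry.
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for kube-apiserver")
	}
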
	I1217 02:12:53.441690    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:53.466017    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:53.494846    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.494846    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:53.499634    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:53.530839    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.530839    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:53.534661    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:53.567189    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.567189    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:53.571412    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:53.598763    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.598763    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:53.602673    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:53.629791    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.629791    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:53.632953    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:53.662323    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.662323    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:53.665394    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:53.695745    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.695745    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:53.701403    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:53.735348    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.735348    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:53.735348    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:53.735348    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:53.816532    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:53.807828   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.809036   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.810223   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.811373   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.812449   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:53.807828   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.809036   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.810223   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.811373   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.812449   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:53.816532    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:53.816532    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:53.843453    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:53.843453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:53.893853    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:53.893853    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:53.954759    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:53.954759    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
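The block above is one complete pass of minikube's diagnostic loop: it probes for each expected control-plane container by name filter, and because the node is running no k8s_* containers at all, every probe returns an empty list. A minimal sketch of the same probe, runnable inside the minikube node; the component names and docker invocation are taken verbatim from the log, the loop wrapper is illustrative:

    # Probe for each expected control-plane container, as the log does.
    # An empty result means "0 containers" for that component.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter=name=k8s_"$c" --format='{{.ID}}')
      echo "$c: ${ids:-<none>}"
    done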
	I1217 02:12:56.499506    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:56.525316    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:56.561689    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.561738    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:56.565616    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:56.594009    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.594009    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:56.599822    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:56.624101    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.624101    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:56.628604    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:56.657977    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.658063    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:56.663240    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:56.694316    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.694316    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:56.698763    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:56.728527    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.728527    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:56.734446    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:56.765315    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.765315    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:56.769182    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:56.796198    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.796198    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:56.796198    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:56.796198    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:56.864777    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:56.864777    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:56.904264    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:56.904264    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:57.000434    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:56.990265   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.991556   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.992920   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.993844   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.996033   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:56.990265   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.991556   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.992920   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.993844   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.996033   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:57.000434    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:57.000434    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:57.034757    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:57.034842    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
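Each pass also runs kubectl describe nodes against the node-local kubeconfig, and every attempt fails the same way: with no kube-apiserver container running, nothing is listening on localhost:8443, so each request ends in "connection refused" and the command exits 1. The failing call can be reproduced by hand; the binary path, kubeconfig path, and port are the ones shown in the log, while the healthz probe is an illustrative shortcut against the same endpoint:

    # Same call the log runs; exits 1 with "connection refused" while
    # the apiserver is down.
    sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
      --kubeconfig=/var/lib/minikube/kubeconfig

    # Quicker liveness check against the same endpoint (illustrative):
    curl -ksf https://localhost:8443/healthz || echo "apiserver not up"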
	I1217 02:12:59.601768    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:59.627731    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:59.657009    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.657009    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:59.660962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:59.690428    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.690428    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:59.694181    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:59.723517    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.723592    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:59.727191    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:59.756251    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.756251    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:59.759627    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:59.791516    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.791516    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:59.795602    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:59.828192    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.828192    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:59.832003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:59.860258    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.860258    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:59.863635    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:59.893207    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.893207    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:59.893207    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:59.893207    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:59.958927    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:59.958927    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:00.004703    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:00.004703    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:00.096612    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:00.084050   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.085145   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.086221   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.088049   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.090502   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:00.084050   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.085145   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.086221   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.088049   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.090502   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:00.096612    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:00.096612    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:00.124914    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:00.124975    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:02.682962    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:02.708543    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:02.737663    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.737663    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:02.741817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:02.772482    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.772482    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:02.778562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:02.806978    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.806978    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:02.813021    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:02.845688    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.845688    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:02.851578    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:02.880144    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.880200    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:02.883811    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:02.918466    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.918544    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:02.922186    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:02.951702    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.951702    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:02.955491    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:02.984638    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.984638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:02.984638    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:02.984638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:03.047941    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:03.047941    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:03.086964    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:03.086964    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:03.173007    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:03.161327   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.162497   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.163381   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.165030   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.166441   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:03.161327   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.162497   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.163381   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.165030   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.166441   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:03.173086    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:03.173086    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:03.202017    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:03.202544    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:05.761010    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:05.786319    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:05.819785    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.819785    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:05.825532    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:05.853318    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.853318    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:05.858274    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:05.887613    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.887613    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:05.891162    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:05.919471    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.919471    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:05.922933    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:05.955441    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.955441    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:05.959241    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:05.984925    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.984925    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:05.989009    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:06.021101    6296 logs.go:282] 0 containers: []
	W1217 02:13:06.021101    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:06.024383    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:06.055098    6296 logs.go:282] 0 containers: []
	W1217 02:13:06.055098    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:06.055098    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:06.055098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:06.107743    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:06.107743    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:06.170319    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:06.170319    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:06.210360    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:06.210360    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:06.299194    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:06.288404   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.289415   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.292346   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.293307   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.294574   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:06.288404   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.289415   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.292346   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.293307   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.294574   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:06.299194    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:06.299194    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
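Between probes, each pass collects the usual evidence: kubelet and Docker unit logs via journalctl, kernel warnings via dmesg, and a container listing that falls back from crictl to docker. These commands are copied verbatim from the log; only the comments are added:

    # Container status with a crictl-to-docker fallback, as run above:
    # `which crictl || echo crictl` keeps the command well-formed even
    # when crictl is absent, and `|| sudo docker ps -a` then takes over.
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a

    # Unit logs and kernel messages gathered in the same pass:
    sudo journalctl -u kubelet -n 400
    sudo journalctl -u docker -u cri-docker -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400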
	I1217 02:13:08.832901    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:08.860263    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:08.890111    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.890111    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:08.893617    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:08.921989    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.921989    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:08.925561    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:08.952883    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.952883    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:08.959516    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:08.991347    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.991347    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:08.995066    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:09.028011    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.028011    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:09.032096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:09.060803    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.060803    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:09.064596    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:09.093542    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.093572    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:09.096987    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:09.123594    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.123615    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:09.123615    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:09.123615    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:09.176222    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:09.176222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:09.238935    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:09.238935    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:09.278804    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:09.278804    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:09.367283    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:09.355984   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.356989   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.358233   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.359697   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.360921   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:09.355984   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.356989   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.358233   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.359697   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.360921   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:09.367283    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:09.367283    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:11.901781    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:11.930493    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:11.963534    6296 logs.go:282] 0 containers: []
	W1217 02:13:11.963534    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:11.967747    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:11.997700    6296 logs.go:282] 0 containers: []
	W1217 02:13:11.997700    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:12.001601    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:12.031862    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.031862    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:12.035544    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:12.066506    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.066506    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:12.071472    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:12.103184    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.103184    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:12.107033    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:12.135713    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.135713    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:12.139268    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:12.170350    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.170350    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:12.174053    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:12.202964    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.202964    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:12.202964    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:12.202964    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:12.252669    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:12.253197    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:12.318088    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:12.318088    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:12.356768    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:12.356768    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:12.443857    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:12.431867   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.432694   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.435515   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.436810   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.439065   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:12.431867   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.432694   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.435515   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.436810   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.439065   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:12.443857    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:12.443857    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:14.980350    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:15.007303    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:15.040020    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.040100    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:15.043303    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:15.073502    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.073502    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:15.077944    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:15.106871    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.106871    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:15.110453    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:15.138071    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.138095    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:15.141547    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:15.171602    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.171659    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:15.175341    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:15.207140    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.207181    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:15.210547    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:15.243222    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.243222    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:15.247103    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:15.280156    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.280232    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:15.280232    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:15.280232    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:15.342862    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:15.342862    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:15.384022    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:15.384022    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:15.469724    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:15.457538   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.458755   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.461376   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.463262   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.464126   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:15.457538   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.458755   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.461376   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.463262   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.464126   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:15.469766    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:15.469807    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:15.497606    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:15.497667    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:18.064895    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:18.090410    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:18.123378    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.123429    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:18.127331    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:18.157210    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.157210    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:18.160924    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:18.191242    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.191242    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:18.195064    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:18.222561    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.222561    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:18.226125    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:18.255891    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.255891    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:18.259860    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:18.288868    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.288868    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:18.292834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:18.322668    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.322668    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:18.325666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:18.353052    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.353052    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:18.353052    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:18.353052    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:18.418504    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:18.418504    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:18.457348    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:18.457348    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:18.568946    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:18.539845   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.540709   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.559501   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.563750   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.565031   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:18.539845   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.540709   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.559501   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.563750   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.565031   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:18.569003    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:18.569003    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:18.602236    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:18.602236    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:21.158752    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:21.184475    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:21.214582    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.214582    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:21.218375    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:21.245604    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.245604    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:21.249850    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:21.281360    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.281360    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:21.286501    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:21.318549    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.318601    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:21.322609    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:21.353429    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.353460    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:21.357031    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:21.391028    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.391028    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:21.394206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:21.423584    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.423584    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:21.427599    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:21.458683    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.458683    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:21.458683    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:21.458683    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:21.526430    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:21.526430    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:21.565490    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:21.565490    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:21.656323    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:21.643307   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.644610   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.648760   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.649980   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.650911   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:21.643307   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.644610   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.648760   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.649980   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.650911   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:21.656323    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:21.656323    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:21.689700    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:21.689700    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
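The roughly three-second cadence between passes comes from the readiness check at the top of each one: a pgrep that matches a running kube-apiserver process by full command line, which keeps failing here. A sketch of that wait as a plain retry loop; only the pgrep invocation is from the log, the retry wrapper and interval are assumptions:

    # Poll for a kube-apiserver process every ~3s until it appears
    # (loop is illustrative; the pgrep pattern is the one the log repeats).
    until sudo pgrep -xnf 'kube-apiserver.*minikube.*'; do
      sleep 3
    done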
	I1217 02:13:24.246630    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:24.280925    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:24.322972    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.322972    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:24.326768    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:24.355732    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.355732    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:24.359957    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:24.391937    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.392009    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:24.395559    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:24.427388    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.427388    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:24.431126    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:24.459891    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.459966    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:24.463468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:24.491009    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.491009    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:24.494465    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:24.524468    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.524468    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:24.528017    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:24.568815    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.568815    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:24.568815    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:24.568815    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:24.632772    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:24.632772    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:24.671731    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:24.671731    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:24.755604    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:24.747209   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.748169   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.750016   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.751205   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.752643   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:24.747209   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.748169   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.750016   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.751205   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.752643   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:24.755604    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:24.755604    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:24.784599    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:24.784660    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:27.338272    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:27.366367    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:27.395715    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.395715    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:27.399158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:27.427362    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.427362    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:27.430752    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:27.461990    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.461990    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:27.465748    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:27.492985    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.492985    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:27.497176    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:27.528724    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.528724    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:27.532970    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:27.571655    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.571655    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:27.575466    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:27.604007    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.604068    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:27.608062    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:27.639624    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.639689    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:27.639735    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:27.639735    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:27.705896    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:27.705896    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:27.745294    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:27.745294    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:27.827462    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:27.817987   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.819077   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.820142   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.821119   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.823572   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:27.817987   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.819077   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.820142   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.821119   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.823572   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:27.827462    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:27.827462    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:27.854463    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:27.854559    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:30.412283    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:30.438474    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:30.469848    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.469848    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:30.473330    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:30.501713    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.501713    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:30.505748    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:30.535870    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.535870    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:30.540177    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:30.572310    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.572310    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:30.576644    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:30.607087    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.607087    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:30.610334    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:30.640168    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.640168    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:30.643628    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:30.671132    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.671132    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:30.677927    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:30.708536    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.708536    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:30.708536    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:30.708536    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:30.773222    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:30.773222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:30.812763    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:30.812763    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:30.932347    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:30.917907   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.918960   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.921632   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.923322   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.925337   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:30.917907   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.918960   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.921632   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.923322   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.925337   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:30.932397    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:30.932444    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:30.961663    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:30.961663    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:33.524404    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:33.548624    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:33.580753    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.580845    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:33.583912    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:33.613001    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.613048    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:33.616808    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:33.645262    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.645262    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:33.649044    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:33.677477    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.677562    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:33.681205    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:33.710607    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.710669    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:33.714600    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:33.742889    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.742889    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:33.746623    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:33.777022    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.777022    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:33.780455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:33.809525    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.809525    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:33.809525    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:33.809525    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:33.860852    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:33.860936    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:33.924768    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:33.924768    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:33.962632    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:33.962632    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:34.054124    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:34.042221   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.043292   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.044548   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.046184   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.047237   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:34.042221   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.043292   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.044548   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.046184   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.047237   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:34.054124    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:34.054124    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:36.589465    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:36.617658    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:36.652432    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.652432    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:36.656189    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:36.694709    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.694709    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:36.700040    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:36.729913    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.729913    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:36.733870    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:36.762591    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.762591    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:36.766493    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:36.796414    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.796414    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:36.800540    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:36.828148    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.828148    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:36.833323    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:36.862390    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.862390    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:36.866173    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:36.895727    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.895814    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:36.895814    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:36.895814    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:36.926240    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:36.926240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:36.975760    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:36.975760    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:37.036350    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:37.036350    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:37.072745    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:37.072745    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:37.161612    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:37.149826   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.150994   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.152971   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.154071   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.155248   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:37.149826   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.150994   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.152971   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.154071   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.155248   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:39.667288    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:39.691212    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:39.724148    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.724148    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:39.727935    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:39.761821    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.761821    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:39.765852    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:39.793659    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.793696    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:39.797422    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:39.825439    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.825473    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:39.828751    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:39.859011    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.859011    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:39.862518    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:39.891552    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.891613    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:39.894978    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:39.926857    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.926857    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:39.930363    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:39.975835    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.975835    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:39.975835    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:39.975835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:40.070107    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:40.058472   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.059584   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.060546   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.062682   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.064347   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:40.058472   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.059584   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.060546   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.062682   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.064347   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:40.070107    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:40.070107    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:40.098563    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:40.098605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:40.147476    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:40.147476    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:40.212702    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:40.212702    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:42.757339    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:42.786178    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:42.817429    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.817429    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:42.821164    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:42.850363    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.850415    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:42.854031    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:42.881774    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.881774    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:42.885802    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:42.915556    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.915556    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:42.919184    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:42.948329    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.948329    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:42.952430    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:42.982355    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.982355    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:42.986768    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:43.017700    6296 logs.go:282] 0 containers: []
	W1217 02:13:43.017700    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:43.021284    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:43.052749    6296 logs.go:282] 0 containers: []
	W1217 02:13:43.052779    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:43.052779    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:43.052813    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:43.091605    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:43.091605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:43.175861    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:43.162839   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.163916   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.164763   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.167177   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.170134   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:43.162839   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.163916   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.164763   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.167177   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.170134   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:43.175861    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:43.175861    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:43.204569    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:43.204569    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:43.257132    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:43.257132    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:45.825092    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:45.853653    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:45.886780    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.886780    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:45.890416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:45.921840    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.923184    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:45.928382    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:45.960187    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.960252    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:45.963959    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:45.993658    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.993712    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:45.997113    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:46.024308    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.024308    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:46.027994    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:46.060725    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.060725    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:46.064446    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:46.092825    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.092825    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:46.098256    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:46.129614    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.129688    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:46.129688    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:46.129688    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:46.216242    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:46.204904   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.206123   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.207788   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.210288   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.211623   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:46.204904   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.206123   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.207788   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.210288   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.211623   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:46.216263    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:46.216263    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:46.248767    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:46.248767    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:46.298044    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:46.298044    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:46.363186    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:46.363186    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:48.911992    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:48.946588    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:48.983880    6296 logs.go:282] 0 containers: []
	W1217 02:13:48.983880    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:48.987999    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:49.017254    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.017254    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:49.021239    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:49.053619    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.053619    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:49.057711    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:49.086289    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.086289    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:49.090230    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:49.123069    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.123069    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:49.130107    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:49.158724    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.158724    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:49.162733    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:49.193515    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.193573    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:49.197116    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:49.230153    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.230201    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:49.230245    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:49.230245    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:49.259747    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:49.259747    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:49.312360    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:49.312456    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:49.375035    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:49.375035    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:49.413908    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:49.413908    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:49.508187    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:49.496893   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.499745   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.502343   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.503338   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.504593   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:49.496893   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.499745   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.502343   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.503338   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.504593   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:52.012834    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:52.037104    6296 out.go:203] 
	W1217 02:13:52.039462    6296 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 02:13:52.039520    6296 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 02:13:52.039588    6296 out.go:285] * Related issues:
	W1217 02:13:52.039588    6296 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1217 02:13:52.039635    6296 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1217 02:13:52.041923    6296 out.go:203] 
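The exit above is the terminal state of the probe loop that fills this log: roughly every three seconds minikube runs `sudo pgrep -xnf kube-apiserver.*minikube.*` plus a set of `docker ps -a --filter=name=k8s_...` queries, finds neither an apiserver process nor a k8s_kube-apiserver container, gathers logs, and retries until the 6m0s node-wait budget is exhausted, which is what produces K8S_APISERVER_MISSING. A minimal sketch of re-running the same two probes by hand inside the node, assuming the profile name newest-cni-383500 seen in the journal below and a working `minikube ssh` on the host:

    minikube ssh -p newest-cni-383500
    # the two checks the loop performs (quotes added for interactive use):
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}}'

An empty result from both, as seen throughout this run, matches the failure mode reported above.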
	
	
	==> Docker <==
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700732008Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700826718Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700839319Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700844420Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700849520Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700872823Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700996336Z" level=info msg="Initializing buildkit"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.801833124Z" level=info msg="Completed buildkit initialization"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.807448530Z" level=info msg="Daemon has completed initialization"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.807644551Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.807743662Z" level=info msg="API listen on [::]:2376"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.807662953Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 02:07:46 newest-cni-383500 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 02:07:47 newest-cni-383500 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Loaded network plugin cni"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 02:07:47 newest-cni-383500 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:14:05.163280   19797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:14:05.164003   19797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:14:05.166797   19797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:14:05.168044   19797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:14:05.169072   19797 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +5.752411] CPU: 12 PID: 469779 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f8b9b910b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f8b9b910af6.
	[  +0.000001] RSP: 002b:00007fffc85e9670 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.875329] CPU: 10 PID: 469916 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7fdfac8dab20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fdfac8daaf6.
	[  +0.000001] RSP: 002b:00007ffd587a0060 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 02:14:05 up  2:33,  0 user,  load average: 1.06, 0.94, 2.06
	Linux newest-cni-383500 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:14:01 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:14:02 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
	Dec 17 02:14:02 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:14:02 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:14:02 newest-cni-383500 kubelet[19604]: E1217 02:14:02.552266   19604 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:14:02 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:14:02 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:14:03 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
	Dec 17 02:14:03 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:14:03 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:14:03 newest-cni-383500 kubelet[19632]: E1217 02:14:03.306686   19632 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:14:03 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:14:03 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:14:03 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
	Dec 17 02:14:03 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:14:03 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:14:04 newest-cni-383500 kubelet[19659]: E1217 02:14:04.060856   19659 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:14:04 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:14:04 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:14:04 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4.
	Dec 17 02:14:04 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:14:04 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:14:04 newest-cni-383500 kubelet[19686]: E1217 02:14:04.803759   19686 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:14:04 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:14:04 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
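
The kubelet crash-loop captured above has a single proximate cause: this kubelet build (v1.35.0-beta.0) refuses to start on a cgroup v1 host, and the WSL2 node here is still on cgroup v1 (the Docker daemon log in the same dump carries the matching deprecation warning). A minimal way to confirm which cgroup version the node actually exposes — a sketch assuming shell access on the node, not part of the recorded run:

    # cgroup2fs => cgroup v2 (unified hierarchy); tmpfs => cgroup v1 (legacy/hybrid)
    stat -fc %T /sys/fs/cgroup/

    # Docker reports the same through its info template (prints 1 or 2)
    docker info --format '{{.CgroupVersion}}'
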
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-383500 -n newest-cni-383500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-383500 -n newest-cni-383500: exit status 2 (586.8242ms)

-- stdout --
	Stopped

-- /stdout --
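
minikube's own probe a few lines earlier (`sudo pgrep -xnf kube-apiserver.*minikube.*`) found no apiserver process before giving up with K8S_APISERVER_MISSING. The same check can be repeated by hand while the container is up — a sketch, not part of the recorded run:

    out/minikube-windows-amd64.exe -p newest-cni-383500 ssh -- "sudo pgrep -fa kube-apiserver || echo 'no kube-apiserver process'"
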
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-383500" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect newest-cni-383500
helpers_test.go:244: (dbg) docker inspect newest-cni-383500:

-- stdout --
	[
	    {
	        "Id": "58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638",
	        "Created": "2025-12-17T01:57:11.100405677Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 462672,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T02:07:38.479713902Z",
	            "FinishedAt": "2025-12-17T02:07:35.952064424Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/hostname",
	        "HostsPath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/hosts",
	        "LogPath": "/var/lib/docker/containers/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638/58edac260513626564270b0fcc3abd947b39f03b431960a5f860cbf36a25d638-json.log",
	        "Name": "/newest-cni-383500",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-383500:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "newest-cni-383500",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752/merged",
	                "UpperDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752/diff",
	                "WorkDir": "/var/lib/docker/overlay2/04b4ca20393c89142cf479fde17b69b346ad84b2fea34bdd93c5253e56d51752/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-383500",
	                "Source": "/var/lib/docker/volumes/newest-cni-383500/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-383500",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-383500",
	                "name.minikube.sigs.k8s.io": "newest-cni-383500",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1db633168a5c321973d71a3d7a937d0960662192a945d2448f4398b25b744030",
	            "SandboxKey": "/var/run/docker/netns/1db633168a5c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63782"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63783"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63784"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63785"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63786"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-383500": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:4c:02",
	                    "DriverOpts": null,
	                    "NetworkID": "a0a3f566cb0e1e68eaf85fc99a3ee131940651a4c9a15e291bc077be33f07b4e",
	                    "EndpointID": "d5e1ca0ef443df8c9e41596f8db19fb0cd842fc42e6efd30a71aaa1d3fefb2d9",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-383500",
	                        "58edac260513"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
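
The inspect output shows the apiserver port 8443/tcp published on 127.0.0.1:63786, so reachability can be probed directly from the Windows host — a sketch assuming curl is available, not part of the recorded run (with the apiserver down it fails with connection refused; a healthy apiserver answers on /healthz):

    curl -sk https://127.0.0.1:63786/healthz
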
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-383500 -n newest-cni-383500
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-383500 -n newest-cni-383500: exit status 2 (579.563ms)

-- stdout --
	Running

                                                
-- /stdout --
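
With the host Running but kubelet crash-looping, the kubelet journal (the source of the `==> kubelet <==` section above) is the thing to watch — a sketch for tailing it over the profile's SSH session, not part of the recorded run:

    out/minikube-windows-amd64.exe -p newest-cni-383500 ssh -- "sudo journalctl -u kubelet -n 30 --no-pager"
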
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-383500 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p newest-cni-383500 logs -n 25: (1.650681s)
helpers_test.go:261: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ unpause │ -p old-k8s-version-044000 --alsologtostderr -v=1                                                                                                                                                                           │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │                     │
	│ image   │ embed-certs-653800 image list --format=json                                                                                                                                                                                │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ default-k8s-diff-port-278200 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-184000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	│ stop    │ -p no-preload-184000 --alsologtostderr -v=3                                                                                                                                                                                │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ addons  │ enable dashboard -p no-preload-184000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ start   │ -p no-preload-184000 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-383500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	│ stop    │ -p newest-cni-383500 --alsologtostderr -v=3                                                                                                                                                                                │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │ 17 Dec 25 02:07 UTC │
	│ addons  │ enable dashboard -p newest-cni-383500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │ 17 Dec 25 02:07 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │                     │
	│ image   │ newest-cni-383500 image list --format=json                                                                                                                                                                                 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:13 UTC │ 17 Dec 25 02:13 UTC │
	│ pause   │ -p newest-cni-383500 --alsologtostderr -v=1                                                                                                                                                                                │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:13 UTC │ 17 Dec 25 02:13 UTC │
	│ unpause │ -p newest-cni-383500 --alsologtostderr -v=1                                                                                                                                                                                │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:14 UTC │ 17 Dec 25 02:14 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:07:37
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 02:07:37.336708    6296 out.go:360] Setting OutFile to fd 968 ...
	I1217 02:07:37.380113    6296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:07:37.380113    6296 out.go:374] Setting ErrFile to fd 1700...
	I1217 02:07:37.380113    6296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:07:37.394455    6296 out.go:368] Setting JSON to false
	I1217 02:07:37.396490    6296 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8845,"bootTime":1765928411,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 02:07:37.397485    6296 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 02:07:37.401853    6296 out.go:179] * [newest-cni-383500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 02:07:37.405009    6296 notify.go:221] Checking for updates...
	I1217 02:07:37.407761    6296 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:37.412054    6296 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:07:37.415031    6296 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 02:07:37.416942    6296 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:07:37.418887    6296 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1217 02:07:37.439676    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:07:37.422499    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:37.422499    6296 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:07:37.541250    6296 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 02:07:37.544536    6296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:07:37.790862    6296 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:07:37.763465755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:07:37.793941    6296 out.go:179] * Using the docker driver based on existing profile
	I1217 02:07:37.795944    6296 start.go:309] selected driver: docker
	I1217 02:07:37.795944    6296 start.go:927] validating driver "docker" against &{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:37.796941    6296 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:07:37.881125    6296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:07:38.106129    6296 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:07:38.085504737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:07:38.106129    6296 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 02:07:38.106129    6296 cni.go:84] Creating CNI manager for ""
	I1217 02:07:38.106661    6296 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:07:38.106789    6296 start.go:353] cluster config:
	{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:38.110370    6296 out.go:179] * Starting "newest-cni-383500" primary control-plane node in "newest-cni-383500" cluster
	I1217 02:07:38.113499    6296 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 02:07:38.115628    6296 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:07:38.118799    6296 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:07:38.118867    6296 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:07:38.118972    6296 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 02:07:38.119036    6296 cache.go:65] Caching tarball of preloaded images
	I1217 02:07:38.119094    6296 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 02:07:38.119094    6296 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 02:07:38.119094    6296 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 02:07:38.197259    6296 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:07:38.197259    6296 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:07:38.197259    6296 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:07:38.197259    6296 start.go:360] acquireMachinesLock for newest-cni-383500: {Name:mk34ae41921c4a11acc2a38ede8796b825a35934 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:07:38.197259    6296 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-383500"
	I1217 02:07:38.197259    6296 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:07:38.197259    6296 fix.go:54] fixHost starting: 
	I1217 02:07:38.204499    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:38.259240    6296 fix.go:112] recreateIfNeeded on newest-cni-383500: state=Stopped err=<nil>
	W1217 02:07:38.259240    6296 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 02:07:38.262335    6296 out.go:252] * Restarting existing docker container for "newest-cni-383500" ...
	I1217 02:07:38.265716    6296 cli_runner.go:164] Run: docker start newest-cni-383500
	I1217 02:07:38.804123    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:38.863188    6296 kic.go:430] container "newest-cni-383500" state is running.
	I1217 02:07:38.868900    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:38.924169    6296 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 02:07:38.926083    6296 machine.go:94] provisionDockerMachine start ...
	I1217 02:07:38.928987    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:38.984001    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:38.984993    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:38.984993    6296 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:07:38.986003    6296 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 02:07:42.161557    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 02:07:42.161646    6296 ubuntu.go:182] provisioning hostname "newest-cni-383500"
	I1217 02:07:42.166827    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.231443    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:42.231698    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:42.231698    6296 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-383500 && echo "newest-cni-383500" | sudo tee /etc/hostname
	I1217 02:07:42.423907    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 02:07:42.432743    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.491085    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:42.491085    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:42.491085    6296 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-383500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-383500/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-383500' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1217 02:07:42.667009    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 02:07:42.667009    6296 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 02:07:42.667009    6296 ubuntu.go:190] setting up certificates
	I1217 02:07:42.667009    6296 provision.go:84] configureAuth start
	I1217 02:07:42.671320    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:42.724474    6296 provision.go:143] copyHostCerts
	I1217 02:07:42.725072    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 02:07:42.725072    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 02:07:42.725072    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 02:07:42.726229    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 02:07:42.726229    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 02:07:42.726812    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 02:07:42.727386    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 02:07:42.727386    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 02:07:42.727386    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 02:07:42.728644    6296 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-383500 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-383500]
	I1217 02:07:42.882778    6296 provision.go:177] copyRemoteCerts
	I1217 02:07:42.886667    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:07:42.889412    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.946034    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:43.080244    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 02:07:43.111350    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:07:43.145228    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 02:07:43.176328    6296 provision.go:87] duration metric: took 509.312ms to configureAuth
	I1217 02:07:43.176328    6296 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:07:43.176328    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:43.180705    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.236378    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.237514    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.237514    6296 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 02:07:43.404492    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 02:07:43.404492    6296 ubuntu.go:71] root file system type: overlay
	I1217 02:07:43.405056    6296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 02:07:43.408624    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.465282    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.465408    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.465408    6296 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 02:07:43.658319    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 02:07:43.662395    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.719191    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.719552    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.719552    6296 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I1217 02:07:43.890999    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 02:07:43.890999    6296 machine.go:97] duration metric: took 4.9648419s to provisionDockerMachine
	I1217 02:07:43.890999    6296 start.go:293] postStartSetup for "newest-cni-383500" (driver="docker")
	I1217 02:07:43.890999    6296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:07:43.895385    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:07:43.899109    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.952181    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.085157    6296 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:07:44.092998    6296 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:07:44.093086    6296 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:07:44.093086    6296 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 02:07:44.093465    6296 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 02:07:44.094379    6296 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 02:07:44.099969    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:07:44.115031    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 02:07:44.146317    6296 start.go:296] duration metric: took 255.2637ms for postStartSetup
	I1217 02:07:44.150381    6296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:07:44.153098    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.206142    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.337637    6296 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:07:44.346313    6296 fix.go:56] duration metric: took 6.1489614s for fixHost
	I1217 02:07:44.346313    6296 start.go:83] releasing machines lock for "newest-cni-383500", held for 6.1489614s
	I1217 02:07:44.350643    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:44.409164    6296 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 02:07:44.413957    6296 ssh_runner.go:195] Run: cat /version.json
	I1217 02:07:44.414540    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.416694    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.466739    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.469418    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	W1217 02:07:44.591848    6296 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I1217 02:07:44.598090    6296 ssh_runner.go:195] Run: systemctl --version
	I1217 02:07:44.614283    6296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:07:44.624324    6296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:07:44.628955    6296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:07:44.642200    6296 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:07:44.642243    6296 start.go:496] detecting cgroup driver to use...
	I1217 02:07:44.642333    6296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:07:44.642453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:07:44.671216    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:07:44.689408    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:07:44.702919    6296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:07:44.707856    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:07:44.727869    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:07:44.747180    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	W1217 02:07:44.751020    6296 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 02:07:44.751020    6296 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 02:07:44.766866    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:07:44.786853    6296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:07:44.806986    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:07:44.828346    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:07:44.848400    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:07:44.870349    6296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:07:44.887217    6296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:07:44.905216    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:45.047629    6296 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I1217 02:07:45.203749    6296 start.go:496] detecting cgroup driver to use...
	I1217 02:07:45.203842    6296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:07:45.209421    6296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 02:07:45.236823    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:07:45.259331    6296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 02:07:45.337368    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:07:45.361492    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:07:45.381383    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:07:45.409600    6296 ssh_runner.go:195] Run: which cri-dockerd
	I1217 02:07:45.421762    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 02:07:45.435668    6296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
	I1217 02:07:45.461708    6296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 02:07:45.616228    6296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 02:07:45.751670    6296 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 02:07:45.751670    6296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 02:07:45.778504    6296 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 02:07:45.800985    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:45.956342    6296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 02:07:46.816501    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:07:46.840410    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 02:07:46.865817    6296 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 02:07:46.890943    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:07:46.914319    6296 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 02:07:47.058242    6296 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 02:07:47.214522    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:47.355565    6296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	W1217 02:07:47.472644    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:07:47.382801    6296 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 02:07:47.407455    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:47.558893    6296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 02:07:47.666138    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:07:47.686246    6296 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 02:07:47.690618    6296 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 02:07:47.697013    6296 start.go:564] Will wait 60s for crictl version
	I1217 02:07:47.702316    6296 ssh_runner.go:195] Run: which crictl
	I1217 02:07:47.713878    6296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:07:47.755301    6296 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 02:07:47.758809    6296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:07:47.803772    6296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:07:47.845573    6296 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 02:07:47.849368    6296 cli_runner.go:164] Run: docker exec -t newest-cni-383500 dig +short host.docker.internal
	I1217 02:07:47.978778    6296 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 02:07:47.983162    6296 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 02:07:47.993198    6296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:07:48.011887    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:48.072090    6296 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 02:07:48.073820    6296 kubeadm.go:884] updating cluster {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:07:48.073820    6296 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:07:48.077080    6296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 02:07:48.110342    6296 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 02:07:48.110411    6296 docker.go:621] Images already preloaded, skipping extraction
	I1217 02:07:48.113821    6296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 02:07:48.144461    6296 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 02:07:48.144530    6296 cache_images.go:86] Images are preloaded, skipping loading
	I1217 02:07:48.144530    6296 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1217 02:07:48.144779    6296 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-383500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1217 02:07:48.149102    6296 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 02:07:48.225894    6296 cni.go:84] Creating CNI manager for ""
	I1217 02:07:48.225894    6296 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:07:48.225894    6296 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 02:07:48.225894    6296 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-383500 NodeName:newest-cni-383500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:07:48.226504    6296 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-383500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1217 02:07:48.230913    6296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 02:07:48.243749    6296 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:07:48.248634    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:07:48.262382    6296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 02:07:48.284386    6296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:07:48.306623    6296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1217 02:07:48.332101    6296 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:07:48.341865    6296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:07:48.360919    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:48.498620    6296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:07:48.520308    6296 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500 for IP: 192.168.76.2
	I1217 02:07:48.520346    6296 certs.go:195] generating shared ca certs ...
	I1217 02:07:48.520390    6296 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:48.520420    6296 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 02:07:48.521152    6296 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 02:07:48.521359    6296 certs.go:257] generating profile certs ...
	I1217 02:07:48.521695    6296 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key
	I1217 02:07:48.521695    6296 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8
	I1217 02:07:48.522472    6296 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key
	I1217 02:07:48.523217    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 02:07:48.523515    6296 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 02:07:48.523598    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 02:07:48.523888    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 02:07:48.524140    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 02:07:48.524399    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 02:07:48.525045    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 02:07:48.526649    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:07:48.558725    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 02:07:48.590333    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:07:48.621493    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 02:07:48.650907    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:07:48.678948    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:07:48.708871    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:07:48.738822    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 02:07:48.769873    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 02:07:48.801411    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 02:07:48.828208    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:07:48.859551    6296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:07:48.888197    6296 ssh_runner.go:195] Run: openssl version
	I1217 02:07:48.903194    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.920018    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 02:07:48.936734    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.943690    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.948571    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.997651    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:07:49.015514    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.035513    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:07:49.056511    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.065394    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.070742    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.117805    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:07:49.140198    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.156992    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 02:07:49.175485    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.184194    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.187479    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.237543    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
	I1217 02:07:49.254809    6296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:07:49.269508    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:07:49.317073    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:07:49.365797    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:07:49.413853    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:07:49.462871    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:07:49.515512    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I1217 02:07:49.558666    6296 kubeadm.go:401] StartCluster: {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:49.563317    6296 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 02:07:49.602899    6296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:07:49.616365    6296 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:07:49.616365    6296 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:07:49.622022    6296 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:07:49.637152    6296 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:07:49.641090    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.693295    6296 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-383500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:49.693843    6296 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-383500" cluster setting kubeconfig missing "newest-cni-383500" context setting]
	I1217 02:07:49.694722    6296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:49.716755    6296 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:07:49.731850    6296 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1217 02:07:49.731850    6296 kubeadm.go:602] duration metric: took 115.4836ms to restartPrimaryControlPlane
	I1217 02:07:49.731850    6296 kubeadm.go:403] duration metric: took 173.1816ms to StartCluster
	I1217 02:07:49.731850    6296 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:49.731850    6296 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:49.732839    6296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:49.734654    6296 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 02:07:49.734654    6296 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:07:49.734654    6296 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:70] Setting dashboard=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:49.734654    6296 addons.go:70] Setting default-storageclass=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.734654    6296 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:239] Setting addon dashboard=true in "newest-cni-383500"
	W1217 02:07:49.734654    6296 addons.go:248] addon dashboard should already be in state true
	I1217 02:07:49.735179    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.739634    6296 out.go:179] * Verifying Kubernetes components...
	I1217 02:07:49.743427    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.744378    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.744378    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.745812    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:49.809135    6296 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:07:49.809532    6296 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 02:07:49.812989    6296 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:49.812989    6296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:07:49.816981    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.817010    6296 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:07:49.818467    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:07:49.818467    6296 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:07:49.823270    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.824987    6296 addons.go:239] Setting addon default-storageclass=true in "newest-cni-383500"
	I1217 02:07:49.825100    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.836645    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.881995    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.881995    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.889991    6296 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:49.889991    6296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:07:49.892991    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.925992    6296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:07:49.943010    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.950996    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:50.005058    6296 api_server.go:52] waiting for apiserver process to appear ...
	I1217 02:07:50.009064    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:50.011068    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.014077    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:07:50.014077    6296 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:07:50.034057    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:07:50.034057    6296 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:07:50.102553    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:07:50.102611    6296 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:07:50.106900    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:50.124027    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:07:50.124027    6296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 02:07:50.189590    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:07:50.189677    6296 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1217 02:07:50.190082    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.190082    6296 retry.go:31] will retry after 343.200838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.212250    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:07:50.212311    6296 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:07:50.231619    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:07:50.231619    6296 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W1217 02:07:50.241078    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.241078    6296 retry.go:31] will retry after 338.608253ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.254747    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:07:50.254794    6296 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:07:50.277655    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:50.277655    6296 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:07:50.303268    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:50.381205    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.381205    6296 retry.go:31] will retry after 204.689537ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
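
The stderr hint about --validate=false is a red herring in this situation: skipping validation only removes the schema download, and the apply itself would still dial the same refused localhost:8443. The useful signal is the interleaved `sudo pgrep -xnf kube-apiserver.*minikube.*` probes, which check whether the apiserver process has come up at all. A sketch of an equivalent readiness probe over HTTP, polling /readyz (which the apiserver serves to unauthenticated clients by default); waitForAPIServer is a hypothetical helper, and InsecureSkipVerify is tolerable only because this is a local health check, not real traffic:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls the apiserver's /readyz endpoint until it answers
// 200 OK or the deadline passes, mirroring the repeated process probes above.
func waitForAPIServer(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if resp, err := client.Get(url); err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not ready within %s", url, timeout)
}

func main() {
	if err := waitForAPIServer("https://localhost:8443/readyz", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}
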
	I1217 02:07:50.510673    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:50.538343    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.585518    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:50.590250    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:50.625635    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.625793    6296 retry.go:31] will retry after 198.686568ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.703247    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.703247    6296 retry.go:31] will retry after 199.792365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.713669    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.714671    6296 retry.go:31] will retry after 441.125735ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.831068    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.910787    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:50.921027    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.921027    6296 retry.go:31] will retry after 637.088373ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.993148    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.993148    6296 retry.go:31] will retry after 819.774881ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
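
The storage-provisioner, storageclass, and dashboard applies interleave because each addon is applied concurrently on its own retry schedule, which is why a single second of wall-clock time (02:07:50) shows all three failing independently. A minimal sketch of that fan-out under the same assumptions as above (plain kubectl on PATH; the manifest paths are the ones from the log, the structure is illustrative rather than minikube's actual addons.go):

package main

import (
	"fmt"
	"os/exec"
	"sync"
)

// Apply each addon's manifests in its own goroutine; with the apiserver down,
// every group fails and retries independently, producing interleaved logs.
func main() {
	groups := map[string][]string{
		"storage-provisioner": {"/etc/kubernetes/addons/storage-provisioner.yaml"},
		"storageclass":        {"/etc/kubernetes/addons/storageclass.yaml"},
		"dashboard": {
			"/etc/kubernetes/addons/dashboard-ns.yaml",
			"/etc/kubernetes/addons/dashboard-svc.yaml",
		},
	}
	var wg sync.WaitGroup
	for name, files := range groups {
		wg.Add(1)
		go func(name string, files []string) {
			defer wg.Done()
			args := []string{"apply", "--force"}
			for _, f := range files {
				args = append(args, "-f", f)
			}
			out, err := exec.Command("kubectl", args...).CombinedOutput()
			fmt.Printf("%s: err=%v out=%s\n", name, err, out)
		}(name, files)
	}
	wg.Wait()
}

Because each goroutine sleeps and retries on its own clock, a slow apiserver start yields exactly this kind of repetitive, interleaved failure log until the binary either connects or gives up.
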
	I1217 02:07:51.009768    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.161082    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:51.282295    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.282369    6296 retry.go:31] will retry after 677.278565ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.510844    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.563702    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:51.642986    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.642986    6296 retry.go:31] will retry after 1.231128198s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.817677    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:51.902470    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.902470    6296 retry.go:31] will retry after 1.160161898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.964724    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:52.009393    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:52.053520    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.053520    6296 retry.go:31] will retry after 497.775491ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.510530    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:52.556698    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:52.641425    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.641425    6296 retry.go:31] will retry after 893.419079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.880811    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:52.961643    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.961643    6296 retry.go:31] will retry after 1.354718896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.009905    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:53.068292    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:53.159843    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.159885    6296 retry.go:31] will retry after 830.811591ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.510300    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:53.539679    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:53.634195    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.634195    6296 retry.go:31] will retry after 1.875797166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:07:53.997012    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:54.010116    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:54.085004    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.085004    6296 retry.go:31] will retry after 2.403477641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:07:54.321510    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:54.401677    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.401677    6296 retry.go:31] will retry after 2.197762331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:07:54.509750    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:55.011577    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:55.509949    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
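
Interleaved with the retries, the driver process polls for the apiserver with sudo pgrep -xnf kube-apiserver.*minikube.* roughly every 500ms, waiting for the process to appear inside the node container. A minimal sketch of an equivalent process-wait loop, assuming a hypothetical waitForProcess helper:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern exists or the
// deadline passes; pgrep exits 0 on a match and non-zero otherwise.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", pattern).Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // the log shows ~500ms between probes
	}
	return fmt.Errorf("no process matching %q within %v", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
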
	I1217 02:07:55.514301    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:55.590724    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:55.590724    6296 retry.go:31] will retry after 3.771224323s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:07:56.010995    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:56.493760    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:56.509755    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:56.580067    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.580067    6296 retry.go:31] will retry after 2.862008002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:07:56.606008    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:56.692846    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.693375    6296 retry.go:31] will retry after 3.419223727s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
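
The --validate=false hint in the stderr is easy to misread: the flag only skips the client-side OpenAPI schema download, it does not make the apply succeed, because the request itself still needs a reachable apiserver. Retrying with validation left on, as minikube does here, is the safer choice. A hypothetical invocation of the suggested flag, shown for illustration only:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --validate=false skips the client-side OpenAPI schema download that is
	// failing above; against a down apiserver the apply would still fail,
	// just later, at the request itself.
	out, err := exec.Command("kubectl", "apply", "--validate=false",
		"-f", "/etc/kubernetes/addons/storage-provisioner.yaml").CombinedOutput()
	fmt.Printf("%s", out)
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}
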
	I1217 02:07:57.009866    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:57.510945    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
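
The 6768 lines interleaved into this stream belong to a parallel test: it is waiting for node no-preload-184000 to report the Ready condition through the forwarded apiserver port 63565 and currently gets EOF about every ten seconds. A sketch of an equivalent readiness poll using client-go; the kubeconfig path and node name are taken from the log, while the nodeReady helper is hypothetical:

package main

import (
	"context"
	"fmt"
	"time"

	v1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(clientset *kubernetes.Clientset, name string) (bool, error) {
	node, err := clientset.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. EOF while the forwarded apiserver port is down
	}
	for _, c := range node.Status.Conditions {
		if c.Type == v1.NodeReady {
			return c.Status == v1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	for {
		ok, err := nodeReady(clientset, "no-preload-184000")
		if err != nil {
			fmt.Println("error getting node condition (will retry):", err)
		} else if ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(10 * time.Second) // the log shows ~10s between probes
	}
}
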
	I1217 02:07:57.510327    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:58.010333    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:58.511391    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:59.013796    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:59.367655    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:59.447582    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:59.457416    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.457416    6296 retry.go:31] will retry after 6.254269418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:07:59.510215    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:59.536524    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.536524    6296 retry.go:31] will retry after 4.240139996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:08:00.010517    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:00.118263    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:00.197472    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.197472    6296 retry.go:31] will retry after 5.486941273s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:08:00.511349    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:01.012031    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:01.510877    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:02.011372    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:02.510995    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.011372    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.511479    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.781390    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:03.867561    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:03.867561    6296 retry.go:31] will retry after 5.255488401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:08:04.011296    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:04.510695    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.011055    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.510174    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.690069    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:05.718147    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:05.792389    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:05.792389    6296 retry.go:31] will retry after 3.294946391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1217 02:08:05.802187    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:05.802187    6296 retry.go:31] will retry after 6.599881974s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:08:06.010721    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:06.509941    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:07.010092    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:07.543861    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:07.511303    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:08.011059    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:08.511015    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.009909    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.092821    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:09.127423    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:09.180638    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.180716    6296 retry.go:31] will retry after 13.056189647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1217 02:08:09.211988    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.212069    6296 retry.go:31] will retry after 13.872512266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:08:09.510829    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:10.010907    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:10.513112    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:11.010572    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:11.509543    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.010570    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.409071    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:12.497495    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:12.497495    6296 retry.go:31] will retry after 9.788092681s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:08:12.510004    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:13.011338    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:13.509984    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:14.010499    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:14.511126    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.010949    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.511741    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:16.011278    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:16.511157    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:17.010863    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:17.577088    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:17.511273    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.010782    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.510594    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:19.011193    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:19.512050    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:20.011700    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:20.511001    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.010461    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.510457    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:22.011002    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:22.242227    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:22.290434    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:22.384800    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.384884    6296 retry.go:31] will retry after 11.75975207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	W1217 02:08:22.424758    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.424758    6296 retry.go:31] will retry after 15.557196078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.510556    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:23.011645    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:23.090496    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:23.176544    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:23.176625    6296 retry.go:31] will retry after 13.26458747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
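
The `retry.go:31` lines above show the pattern minikube uses for each failed addon apply: re-run the command after a randomized, growing delay. A minimal self-contained Go sketch of that retry-with-backoff shape (illustrative only; retryWithBackoff and its jitter policy are assumptions for this example, not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff re-runs fn until it succeeds or attempts are
// exhausted, sleeping a jittered, growing delay between tries --
// the same shape as the "will retry after Ns" lines in the log.
// (Hypothetical helper for illustration; not minikube's retry.go.)
func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		// Double the delay each attempt and add up to 50% random jitter.
		delay := base * time.Duration(1<<i)
		delay += time.Duration(rand.Int63n(int64(delay / 2)))
		fmt.Printf("will retry after %s: %v\n", delay, err)
		time.Sleep(delay)
	}
	return err
}

func main() {
	calls := 0
	err := retryWithBackoff(4, 500*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("connection refused")
		}
		return nil
	})
	fmt.Println("result:", err, "after", calls, "calls")
}
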
	I1217 02:08:23.510872    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.011245    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.511483    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:25.011656    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:25.510967    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:26.012125    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:26.512672    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:27.011155    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:27.612061    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:27.512368    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:28.010889    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:28.511767    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.011035    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.512111    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:30.010919    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:30.510464    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:31.010433    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:31.511392    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.010680    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.510963    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:33.011818    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:33.511638    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:34.011591    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:34.151810    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:34.242474    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:34.242474    6296 retry.go:31] will retry after 23.644538854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:34.513602    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.011269    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.511142    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:36.011267    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:36.446774    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:08:36.511283    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:36.541778    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:36.541860    6296 retry.go:31] will retry after 14.024805043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:37.010743    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:37.653192    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:37.510520    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:37.987959    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:08:38.011587    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:38.113276    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:38.113276    6296 retry.go:31] will retry after 20.609884455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:38.511817    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:39.012624    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:39.511353    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:40.011079    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:40.511636    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.011582    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.512671    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:42.011503    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:42.511640    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:43.011054    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:43.510485    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.011395    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.511333    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:45.011435    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:45.513316    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:46.012600    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:46.512307    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.012227    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.512888    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:48.011996    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:48.511276    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:49.011053    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:49.511776    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:50.011678    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:50.050889    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.050889    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:50.055201    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:50.085770    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.085770    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:50.090316    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:50.123762    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.123762    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:50.127529    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:50.157626    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.157626    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:50.163652    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:50.189945    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.189945    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:50.193620    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:50.222819    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.222866    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:50.227818    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:50.256909    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.256909    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:50.260970    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:50.290387    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.290387    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:50.290387    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:50.290387    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:50.357876    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:50.357876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:50.420098    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:50.420098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:50.460376    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:50.460376    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:50.542989    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:50.534097    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.535406    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.536541    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.537655    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.539165    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:50.534097    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.535406    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.536541    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.537655    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.539165    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:50.542989    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:50.542989    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:50.570331    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:50.645772    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:50.645772    6296 retry.go:31] will retry after 16.344343138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
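
Every apply failure above has the same root cause: nothing is listening on localhost:8443, so kubectl's OpenAPI download (its first network call) is refused. Turning validation off with --validate=false would not help here, since the subsequent create/update calls would hit the same refused connection; the applies can only succeed once the apiserver is reachable. A hedged Go sketch of gating work on an apiserver readiness probe (the /readyz URL and the InsecureSkipVerify shortcut are assumptions for a local probe, not minikube's own gating logic):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForAPIServer polls the apiserver's /readyz endpoint until it
// answers 200 or the deadline passes. Sketch only: skipping TLS
// verification is a shortcut for a localhost probe, not something
// to do against a remote cluster.
func waitForAPIServer(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not ready after %s", url, timeout)
}

func main() {
	if err := waitForAPIServer("https://localhost:8443/readyz", 30*time.Second); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver ready; safe to apply addons")
}
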
	W1217 02:08:47.695483    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:53.075519    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:53.098924    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:53.131675    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.131675    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:53.135542    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:53.166511    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.166511    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:53.170265    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:53.198547    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.198547    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:53.202694    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:53.232459    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.232459    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:53.235758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:53.263802    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.263802    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:53.268318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:53.296956    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.296956    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:53.301349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:53.331331    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.331331    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:53.335255    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:53.367520    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.367550    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:53.367577    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:53.367602    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:53.453750    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:53.444459    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.445431    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.446930    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.448003    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.449000    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:53.444459    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.445431    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.446930    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.448003    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.449000    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:53.453837    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:53.453887    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:53.485058    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:53.485058    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:53.540050    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:53.540050    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:53.604101    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:53.604101    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.146858    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:56.172227    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:56.203897    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.203941    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:56.207562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:56.236114    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.236114    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:56.240341    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:56.274958    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.274958    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:56.280577    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:56.308906    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.308906    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:56.312811    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:56.340777    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.340836    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:56.343843    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:56.371408    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.371441    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:56.374771    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:56.406487    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.406487    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:56.410973    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:56.441247    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.441247    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:56.441247    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:56.441247    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:56.506877    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:56.506877    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.548841    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:56.548841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:56.633101    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:56.624778    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.625942    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.626969    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.628325    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.629359    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:56.624778    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.625942    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.626969    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.628325    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.629359    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:56.633101    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:56.633101    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:56.659421    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:56.659457    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:57.892877    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:57.970838    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:57.970838    6296 retry.go:31] will retry after 27.385193451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:58.728649    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:58.834139    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:58.834680    6296 retry.go:31] will retry after 32.13321777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:59.213728    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:59.238361    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:59.266298    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.266298    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:59.270295    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:59.299414    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.299414    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:59.302581    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:59.335627    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.335627    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:59.339238    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:59.367042    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.367042    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:59.371258    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:59.401507    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.401507    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:59.405468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:59.436657    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.436657    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:59.440955    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:59.471027    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.471027    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:59.474047    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:59.505164    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.505164    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:59.505164    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:59.505164    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:59.533835    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:59.533835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:59.586695    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:59.587671    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:59.648841    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:59.648841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:59.688691    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:59.688691    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:59.777044    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:59.763261    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.764003    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.767722    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.770018    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.771065    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:59.763261    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.764003    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.767722    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.770018    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.771065    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
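	The block above repeats essentially unchanged for the remainder of this failure: kube-apiserver never comes up, so every kubectl call against localhost:8443 is refused, and minikube cycles through the same log gathering (kubelet, dmesg, describe nodes, Docker, container status) on each retry. A minimal way to confirm the same condition by hand, assuming the docker driver and a placeholder profile name (<profile> is hypothetical here, not taken from this log):

	# shell into the minikube node for the affected profile
	minikube ssh -p <profile>
	# is any apiserver container present at all? (mirrors the docker ps polling above)
	docker ps -a --filter name=k8s_kube-apiserver --format '{{.ID}} {{.Status}}'
	# is anything listening on the apiserver port?
	sudo ss -ltn 'sport = :8443'
	# probe the apiserver health endpoint directly (fails fast if nothing is listening)
	curl -ksS https://localhost:8443/livez || echo "apiserver not reachable"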
	I1217 02:09:02.282707    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:02.307570    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:02.340326    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.340412    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:02.343993    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:02.374035    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.374079    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:02.377688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	W1217 02:08:57.736771    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:02.409724    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.409724    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:02.414154    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:02.442993    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.442993    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:02.447591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:02.474966    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.474966    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:02.479447    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:02.511675    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.511675    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:02.515939    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:02.544034    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.544034    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:02.548633    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:02.578196    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.578196    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:02.578196    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:02.578196    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:02.642449    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:02.643420    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:02.681562    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:02.681562    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:02.766017    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:02.754951    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.756418    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.757119    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.759531    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.760553    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:02.754951    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.756418    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.757119    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.759531    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.760553    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:02.766017    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:02.766017    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:02.795166    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:02.795166    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:05.347132    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:05.372840    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:05.424611    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.424686    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:05.428337    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:05.461682    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.461682    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:05.465790    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:05.495395    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.495395    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:05.499215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:05.528620    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.528620    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:05.532226    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:05.560375    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.560375    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:05.564119    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:05.595214    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.595214    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:05.600088    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:05.633183    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.633183    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:05.636776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:05.664840    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.664840    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:05.664840    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:05.664840    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:05.718503    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:05.718503    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:05.781489    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:05.781489    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:05.821081    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:05.821081    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:05.905451    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:05.896107    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.897043    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.898918    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.899910    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.901056    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:05.896107    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.897043    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.898918    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.899910    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.901056    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:05.905451    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:05.905451    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:06.996471    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:09:07.077056    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:07.077056    6296 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
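	The "Enabling 'default-storageclass' returned an error" block above is the same root cause surfacing through a different path: kubectl validates manifests by first downloading the OpenAPI schema from the apiserver, so with nothing listening on 8443 the apply fails before any object is created, and minikube logs the error and retries. A rough sketch of that retry pattern in shell (the retry count and sleep are illustrative assumptions, not minikube's actual values):

	# retry kubectl apply a few times, as the 'apply failed, will retry' lines above suggest
	for attempt in 1 2 3 4 5; do
	  if sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	       /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	       -f /etc/kubernetes/addons/storageclass.yaml; then
	    break
	  fi
	  echo "apply attempt ${attempt} failed; retrying" >&2
	  sleep 2
	done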
	I1217 02:09:08.443326    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:08.470285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:08.499191    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.499191    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:08.503346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:08.531727    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.531727    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:08.535874    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:08.567724    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.567724    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:08.571504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:08.601814    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.601814    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:08.605003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:08.638738    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.638815    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:08.642116    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:08.672949    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.672949    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:08.676953    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:08.706081    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.706145    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:08.709298    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:08.737856    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.737856    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:08.737856    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:08.737856    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:08.798236    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:08.798236    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:08.838053    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:08.838053    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:08.925271    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:08.915579    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.916804    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.917832    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.919242    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.920277    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:08.915579    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.916804    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.917832    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.919242    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.920277    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:08.925271    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:08.925271    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:08.952860    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:08.952934    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.505032    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:11.532273    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:11.560855    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.560907    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:11.564808    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:11.595967    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.596024    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:11.599911    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:11.628443    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.628443    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:11.632103    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:11.659899    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.659899    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:11.663896    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:11.695830    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.695864    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:11.699333    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:11.728245    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.728314    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:11.731834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:11.762004    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.762038    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:11.765497    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:11.800437    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.800437    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:11.800437    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:11.800437    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.850659    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:11.850659    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:11.927328    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:11.927328    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:11.968115    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:11.968115    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:12.061366    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:12.049456    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.050395    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.051658    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.052989    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.055935    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:12.049456    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.050395    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.051658    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.052989    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.055935    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:12.061366    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:12.061366    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:09:07.775163    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:14.593463    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:14.619698    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:14.649625    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.649625    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:14.653809    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:14.682807    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.682865    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:14.686225    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:14.716867    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.716867    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:14.720947    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:14.748712    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.748712    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:14.753598    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:14.786467    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.786467    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:14.790745    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:14.820388    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.820388    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:14.824364    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:14.856683    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.856715    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:14.860387    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:14.907334    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.907388    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:14.907388    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:14.907388    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:14.970536    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:14.971543    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:15.009837    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:15.009837    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:15.100833    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:15.089537    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.090644    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.091541    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.092652    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.093429    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:15.089537    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.090644    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.091541    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.092652    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.093429    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:15.100833    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:15.100833    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:15.129774    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:15.129838    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:17.687506    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:17.711884    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:17.740676    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.740676    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:17.743807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:17.775526    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.775598    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:17.779196    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:17.810564    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.810564    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:17.815366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:17.847149    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.847149    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:17.850304    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:17.880825    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.880825    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:17.884416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:17.913663    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.913663    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:17.917519    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:17.949675    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.949736    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:17.953399    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:17.981777    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.981777    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:17.981853    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:17.981853    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:18.045143    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:18.045143    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:18.085682    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:18.085682    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:18.174824    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:18.164839    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.166260    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.167755    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.169313    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.170543    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:18.164839    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.166260    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.167755    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.169313    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.170543    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:18.174862    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:18.174890    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:18.201721    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:18.201721    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:20.754573    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:20.779418    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:20.815289    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.815336    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:20.821329    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:20.849494    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.849566    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:20.853416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:20.886139    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.886213    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:20.890864    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:20.921623    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.921691    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:20.925413    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:20.955001    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.955030    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:20.959115    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:20.986446    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.986446    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:20.990622    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:21.019381    6296 logs.go:282] 0 containers: []
	W1217 02:09:21.019903    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:21.023386    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:21.049708    6296 logs.go:282] 0 containers: []
	W1217 02:09:21.049708    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:21.049708    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:21.049708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:21.114512    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:21.114512    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:21.154312    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:21.154312    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:21.241835    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:21.232254    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.233191    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.235446    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.236247    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.238241    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:21.232254    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.233191    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.235446    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.236247    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.238241    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:21.241835    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:21.241835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:21.269935    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:21.269935    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:09:17.811223    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:23.827385    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:23.851293    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:23.884017    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.884017    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:23.887852    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:23.920819    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.920819    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:23.925124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:23.953397    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.953468    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:23.957090    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:23.987965    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.987965    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:23.992238    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:24.021188    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.021188    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:24.027472    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:24.059066    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.059066    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:24.062927    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:24.092066    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.092066    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:24.096083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:24.130020    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.130083    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:24.130083    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:24.130083    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:24.193264    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:24.193264    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:24.233590    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:24.233590    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:24.334738    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:24.323376    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.324478    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.325163    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327407    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327995    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:24.323376    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.324478    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.325163    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327407    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327995    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:24.334738    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:24.334738    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:24.361711    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:24.361711    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:25.361736    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:09:25.443830    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:25.443830    6296 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
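	Note that the --validate=false hint in the stderr above would not rescue these applies: schema validation is merely the first request that hits the apiserver, and the underlying TCP connection to localhost:8443 is refused, so the apply itself fails the same way even with validation disabled:

	# validation skipped, but the apply still needs a reachable apiserver
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --validate=false \
	  -f /etc/kubernetes/addons/storage-provisioner.yaml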
	I1217 02:09:26.915928    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:26.940552    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:26.972265    6296 logs.go:282] 0 containers: []
	W1217 02:09:26.972334    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:26.975468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:27.004131    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.004131    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:27.007688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:27.040755    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.040755    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:27.044298    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:27.075607    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.075607    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:27.079764    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:27.109726    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.109726    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:27.113807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:27.142060    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.142060    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:27.145049    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:27.179827    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.179898    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:27.183340    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:27.212340    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.212340    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:27.212340    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:27.212340    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:27.290453    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:27.280957    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.282008    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.283593    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.284873    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.286226    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:27.290453    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:27.290453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:27.317919    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:27.317919    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:27.372636    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:27.372636    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:27.434881    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:27.434881    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
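	Each retry cycle above probes the Docker runtime for every control-plane container by its k8s_<component> name and finds none. A sketch of the same probe done by hand inside the node (the container names are taken from the log; the loop itself is illustrative):

	    # Replay the per-component container probe from the log above.
	    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	             kube-controller-manager kindnet kubernetes-dashboard; do
	      ids=$(docker ps -a --filter=name=k8s_${c} --format={{.ID}})
	      if [ -z "$ids" ]; then echo "no container matching k8s_${c}"; else echo "k8s_${c}: $ids"; fi
	    done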
	I1217 02:09:29.980965    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:30.007081    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:30.038766    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.038766    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:30.042837    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:30.074216    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.074277    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:30.077495    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:30.109815    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.109815    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:30.113543    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:30.144692    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.144692    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:30.148595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:30.181530    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.181530    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:30.185056    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:30.230054    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.230054    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:30.233965    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:30.264421    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.264421    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:30.268191    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:30.302463    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.302463    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:30.302463    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:30.302463    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:30.369905    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:30.369905    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:30.407364    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:30.407364    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:30.501045    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:30.489137    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.491259    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.493208    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.494311    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.496063    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:30.501045    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:30.501045    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:30.529058    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:30.529119    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:30.973740    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:09:31.053832    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:31.053832    6296 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:09:31.057712    6296 out.go:179] * Enabled addons: 
	I1217 02:09:31.060716    6296 addons.go:530] duration metric: took 1m41.3245326s for enable addons: enabled=[]
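	With every callback failing, addon enablement finishes with an empty set. The diagnostics the retry loop keeps re-collecting can be gathered in one pass; these are the exact commands from the Run: lines above, usable directly in a shell on the node:

	    # One-shot collection of the same logs the retry loop gathers.
	    sudo journalctl -u kubelet -n 400
	    sudo journalctl -u docker -u cri-docker -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a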
	W1217 02:09:27.847902    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:33.093000    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:33.117479    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:33.148299    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.148299    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:33.152403    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:33.180747    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.180747    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:33.184258    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:33.214319    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.214389    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:33.217921    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:33.244463    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.244463    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:33.248882    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:33.280520    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.280573    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:33.284251    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:33.313836    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.313883    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:33.318949    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:33.351545    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.351545    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:33.355242    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:33.384638    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.384638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:33.384638    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:33.384638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:33.438624    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:33.438624    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:33.503148    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:33.504145    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:33.542770    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:33.542770    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:33.628872    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:33.616788    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.618355    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.619202    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.622311    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.623559    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:33.628872    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:33.628872    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:36.163766    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:36.190660    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:36.219485    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.219485    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:36.223169    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:36.253826    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.253826    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:36.257584    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:36.289684    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.289684    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:36.293455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:36.321228    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.321228    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:36.326076    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:36.355893    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.355893    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:36.360432    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:36.392307    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.392359    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:36.395377    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:36.427797    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.427797    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:36.431432    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:36.465462    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.465547    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:36.465590    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:36.465605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:36.515585    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:36.515688    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:36.577828    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:36.577828    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:36.617923    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:36.617923    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:36.706865    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:36.696037    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.697154    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.698217    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.699314    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.700190    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:36.706865    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:36.706865    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:39.240583    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:39.269426    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:39.300548    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.300548    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:39.304455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:39.337640    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.337640    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:39.341427    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:39.375280    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.375280    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:39.379328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:39.408206    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.408291    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:39.413138    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:39.439760    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.439760    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:39.443728    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:39.470865    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.471120    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:39.477630    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:39.510101    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.510101    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:39.515759    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:39.545423    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.545494    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:39.545494    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:39.545559    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:39.574474    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:39.574474    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:39.627410    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:39.627410    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:39.687852    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:39.687852    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:39.730823    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:39.730823    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:39.820771    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:39.809479    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.810890    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.811655    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.814487    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.816836    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:42.326489    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:42.349989    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:42.381673    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.381673    6296 logs.go:284] No container was found matching "kube-apiserver"
	W1217 02:09:37.889672    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:42.385392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:42.414575    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.414575    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:42.418510    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:42.452120    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.452120    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:42.456157    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:42.484625    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.484625    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:42.487782    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:42.520235    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.520235    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:42.525546    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:42.558589    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.558589    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:42.561770    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:42.592364    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.592364    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:42.596368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:42.625522    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.625522    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:42.625522    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:42.625522    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:42.661616    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:42.661616    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:42.748046    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:42.737433    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.739312    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.740542    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.743197    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.744170    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:42.748046    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:42.748046    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:42.778854    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:42.778854    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:42.827860    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:42.827860    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:45.394220    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:45.418501    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:45.453084    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.453132    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:45.457433    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:45.491679    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.491679    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:45.495517    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:45.524934    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.524934    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:45.528788    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:45.559787    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.559837    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:45.563714    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:45.608019    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.608104    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:45.612132    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:45.639869    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.639869    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:45.644002    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:45.671767    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.671767    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:45.675466    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:45.704056    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.704104    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:45.704104    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:45.704104    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:45.766557    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:45.766557    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:45.807449    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:45.807449    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:45.898686    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:45.887850    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.888794    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.889893    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.891161    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.894108    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:45.898686    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:45.898686    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:45.924614    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:45.924614    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:48.482563    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:48.510137    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:48.546063    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.546063    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:48.551905    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:48.588536    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.588617    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:48.592628    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:48.621540    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.621540    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:48.625701    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:48.653505    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.653505    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:48.659485    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:48.688940    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.689008    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:48.692649    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:48.718858    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.718858    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:48.722907    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:48.752451    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.752451    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:48.755913    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:48.785865    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.785903    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:48.785903    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:48.785948    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:48.842730    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:48.843261    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:48.905352    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:48.905352    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:48.945271    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:48.945271    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:49.027913    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:49.016272    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.017718    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.022195    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.023419    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.024431    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:49.027963    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:49.027963    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:51.563182    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:51.587223    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:51.619597    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.619621    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:51.623355    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:51.652069    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.652152    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:51.655716    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:51.684602    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.684653    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:51.687735    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:51.716327    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.716327    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:51.720054    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:51.750202    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.750266    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:51.753821    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:51.781863    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.781863    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:51.785648    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:51.814791    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.814841    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:51.818565    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:51.850654    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.850654    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
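The eight checks above follow one pattern: one "docker ps -a" per control-plane component, filtered by the k8s_ name prefix, with empty output reported as "0 containers". A self-contained Go sketch of that loop (component list copied from the log; the helper and everything else are illustrative, not minikube's code):

    // List container IDs per component the way the log entries do,
    // using docker's name filter.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func containerIDs(component string) []string {
        out, err := exec.Command("docker", "ps", "-a",
            "--filter", "name=k8s_"+component,
            "--format", "{{.ID}}").Output()
        if err != nil || len(strings.TrimSpace(string(out))) == 0 {
            return nil // mirrors the "0 containers: []" lines above
        }
        return strings.Fields(string(out))
    }

    func main() {
        for _, c := range []string{"kube-apiserver", "etcd", "coredns",
            "kube-scheduler", "kube-proxy", "kube-controller-manager",
            "kindnet", "kubernetes-dashboard"} {
            fmt.Printf("%s: %d containers\n", c, len(containerIDs(c)))
        }
    }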
	I1217 02:09:51.850654    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:51.850654    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:51.912429    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:51.912429    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:51.951795    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:51.951795    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:52.035486    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:52.024665    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.026342    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.028055    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.029764    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.030775    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:52.024665    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.026342    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.028055    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.029764    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.030775    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
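Each "failed describe nodes" block records the wrapped command, its exit status, and the captured streams, which is why the same five errors appear twice: once in the error string and once in the ** stderr ** section. A rough Go sketch of that capture-and-report shape (assumed code for illustration, not minikube's logs.go):

    // Run a command, keep stdout and stderr separate, and report the
    // exit status, roughly how the failure entries above are assembled.
    package main

    import (
        "bytes"
        "errors"
        "fmt"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("/bin/bash", "-c",
            "kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig")
        var stdout, stderr bytes.Buffer
        cmd.Stdout, cmd.Stderr = &stdout, &stderr

        err := cmd.Run()
        var exitErr *exec.ExitError
        if errors.As(err, &exitErr) {
            fmt.Printf("Process exited with status %d\n", exitErr.ExitCode())
            fmt.Printf("stdout:\n%s\nstderr:\n%s\n", stdout.String(), stderr.String())
        }
    }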
	I1217 02:09:52.035486    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:52.035486    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:52.063472    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:52.063472    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
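The container-status step uses a shell fallback: try crictl if it resolves on PATH, otherwise fall back to "docker ps -a". The same fallback, sketched in Go for illustration (helper name and sudo usage are assumptions):

    // Prefer crictl for container status; fall back to docker when
    // crictl is missing or fails, like the shell one-liner above.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func containerStatus() ([]byte, error) {
        if _, err := exec.LookPath("crictl"); err == nil {
            if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
                return out, nil
            }
        }
        return exec.Command("sudo", "docker", "ps", "-a").Output()
    }

    func main() {
        out, err := containerStatus()
        if err != nil {
            fmt.Println("no container runtime answered:", err)
            return
        }
        fmt.Print(string(out))
    }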
	W1217 02:09:47.930106    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
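The out-of-order node_ready.go warning above is interleaved from the parallel no-preload test (process 6768), which polls the node's Ready condition through the forwarded apiserver port and retries on EOF. A minimal polling sketch of that shape (the URL is copied from the log; the attempt count, interval, and TLS setting are assumptions):

    // Poll a node object with retry, matching the shape of the
    // node_ready.go warnings; not the test suite's actual code.
    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // The test talks to a forwarded localhost port with a
            // self-signed cert, so verification is skipped here.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        url := "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000"
        for attempt := 1; attempt <= 5; attempt++ {
            resp, err := client.Get(url)
            if err != nil {
                fmt.Printf("attempt %d: %v (will retry)\n", attempt, err)
                time.Sleep(10 * time.Second)
                continue
            }
            resp.Body.Close()
            fmt.Println("node object fetched:", resp.Status)
            return
        }
    }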
	I1217 02:09:54.631678    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:54.657392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:54.689037    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.689037    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:54.692460    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:54.723231    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.723231    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:54.729158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:54.759168    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.759168    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:54.762883    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:54.792371    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.792371    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:54.796165    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:54.828375    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.828375    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:54.832201    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:54.862409    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.862476    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:54.866107    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:54.897161    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.897161    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:54.900834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:54.947452    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.947452    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:54.947452    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:54.947452    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:55.016411    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:55.016411    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:55.055628    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:55.055628    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:55.152557    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:55.141168    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.142077    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.145931    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.147597    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.148932    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:55.141168    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.142077    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.145931    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.147597    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.148932    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:55.152599    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:55.152599    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:55.180492    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:55.180492    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:57.741989    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:57.768328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:57.799200    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.799200    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:57.803065    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:57.832042    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.832042    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:57.835921    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:57.863829    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.863891    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:57.867347    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:57.896797    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.896822    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:57.900369    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:57.929832    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.929907    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:57.933326    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:57.960278    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.960278    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:57.964215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:57.992277    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.992324    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:57.995951    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:58.026155    6296 logs.go:282] 0 containers: []
	W1217 02:09:58.026254    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:58.026254    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:58.026303    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:58.091999    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:58.091999    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:58.131520    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:58.131520    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:58.226831    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:58.216784    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.218266    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.219997    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.221198    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.222992    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:58.216784    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.218266    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.219997    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.221198    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.222992    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:58.226831    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:58.226831    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:58.256592    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:58.256635    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:00.809919    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:00.842222    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:00.872955    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.872955    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:00.876666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:00.906031    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.906031    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:00.909593    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:00.939873    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.939946    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:00.943346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:00.972609    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.972643    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:00.975886    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:01.005269    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.005269    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:01.009766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:01.041677    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.041677    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:01.048361    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:01.081235    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.081312    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:01.084849    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:01.113437    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.113437    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:01.113437    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:01.113437    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:01.160067    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:01.160624    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:01.225071    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:01.225071    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:01.265307    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:01.265307    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:01.348506    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:01.336920    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.338210    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.339738    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.341232    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.342188    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:01.336920    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.338210    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.339738    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.341232    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.342188    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:01.348535    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:01.348571    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:09:57.967423    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:03.891628    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:03.925404    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:03.965688    6296 logs.go:282] 0 containers: []
	W1217 02:10:03.965688    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:03.968982    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:04.006348    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.006348    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:04.009769    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:04.039968    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.039968    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:04.044404    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:04.078472    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.078472    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:04.081894    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:04.113348    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.113348    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:04.117138    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:04.148885    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.148885    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:04.152756    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:04.181559    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.181616    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:04.185351    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:04.217017    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.217017    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:04.217017    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:04.217017    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:04.284540    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:04.284540    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:04.324402    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:04.324402    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:04.409943    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:04.395416    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.396326    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.402206    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.403321    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.404006    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:04.395416    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.396326    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.402206    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.403321    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.404006    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:04.409943    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:04.409943    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:04.438771    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:04.438771    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:06.997897    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:07.024185    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:07.054915    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.055512    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:07.060167    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:07.089778    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.089778    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:07.093773    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:07.124641    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.124641    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:07.128016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:07.154834    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.154915    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:07.158505    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:07.188568    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.188568    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:07.192962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:07.225078    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.225078    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:07.228699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:07.258599    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.258659    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:07.262590    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:07.291623    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.291623    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:07.291623    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:07.291623    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:07.322611    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:07.322611    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:07.374970    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:07.374970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:07.438795    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:07.438795    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:07.479442    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:07.479442    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:07.566162    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:07.555486    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.557015    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.558199    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559195    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559622    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:07.555486    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.557015    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.558199    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559195    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559622    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:10.072312    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:10.096505    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:10.125617    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.125617    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:10.129377    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:10.157921    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.157921    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:10.161850    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:10.191705    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.191705    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:10.196003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:10.224412    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.224482    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:10.229368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:10.258140    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.258140    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:10.261205    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:10.292047    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.292047    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:10.296511    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:10.325818    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.325818    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:10.329752    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:10.359454    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.359530    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:10.359530    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:10.359530    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:10.413970    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:10.413970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:10.476665    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:10.476665    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:10.516335    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:10.516335    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:10.602353    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:10.592838    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.594139    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.595393    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.596552    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.597619    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:10.592838    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.594139    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.595393    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.596552    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.597619    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:10.602353    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:10.602353    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:10:08.007712    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:13.134148    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:13.159720    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:13.191534    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.191534    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:13.195626    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:13.230035    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.230035    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:13.233817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:13.266476    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.266476    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:13.270598    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:13.305852    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.305852    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:13.310349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:13.341805    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.341867    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:13.345346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:13.377945    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.377945    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:13.381659    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:13.411885    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.411957    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:13.416039    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:13.446642    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.446642    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:13.446642    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:13.446642    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:13.487083    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:13.487083    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:13.574632    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:13.564930    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.565686    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.568158    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.569159    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.570310    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:13.564930    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.565686    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.568158    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.569159    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.570310    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:13.574632    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:13.574632    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:13.604181    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:13.604702    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:13.660020    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:13.660020    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:16.225038    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:16.248922    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:16.280247    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.280247    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:16.284285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:16.312596    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.312596    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:16.316952    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:16.345108    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.345108    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:16.348083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:16.377403    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.377403    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:16.380619    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:16.410555    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.410555    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:16.414048    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:16.446454    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.446454    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:16.449405    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:16.478967    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.478967    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:16.484108    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:16.516422    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.516422    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:16.516422    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:16.516422    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:16.580305    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:16.580305    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:16.618663    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:16.618663    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:16.705105    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:16.694074    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.695040    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.696842    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.698676    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.700646    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:16.694074    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.695040    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.696842    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.698676    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.700646    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:16.705105    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:16.705105    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:16.732046    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:16.732046    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:19.284431    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:19.307909    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:19.340842    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.340842    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:19.344830    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:19.371150    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.371150    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:19.374863    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:19.403216    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.403216    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:19.406907    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:19.433979    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.433979    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:19.438046    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:19.469636    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.469636    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:19.473675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:19.504296    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.504296    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:19.508671    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:19.535932    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.535932    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:19.539707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:19.567355    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.567416    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:19.567416    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:19.567416    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:19.629876    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:19.629876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:19.678547    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:19.678547    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:19.785306    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:19.776195    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.777270    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.778111    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.779442    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.780820    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
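
Every "describe nodes" attempt in this section fails the same way: with no kube-apiserver container running, nothing is listening on port 8443, so each of kubectl's discovery requests to localhost:8443 is refused before any API call happens. A minimal Go check that reproduces that failure mode (the host and port are the ones shown in the log; everything else is illustrative):

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// With the apiserver down this fails with the same error the log
	// repeats: "dial tcp [::1]:8443: connect: connection refused".
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver unreachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
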
	I1217 02:10:19.785306    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:19.785371    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:19.813137    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:19.813137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:22.369643    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:10:18.049946    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:22.396731    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:22.431018    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.431018    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:22.434688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:22.463307    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.463307    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:22.467323    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:22.497065    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.497065    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:22.500574    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:22.531497    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.531564    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:22.535088    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:22.563706    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.563779    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:22.567344    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:22.602516    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.602597    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:22.606242    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:22.637637    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.637699    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:22.641314    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:22.668078    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.668078    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:22.668078    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:22.668078    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:22.754963    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:22.744973    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.745956    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.748143    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.749016    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.751155    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:22.754963    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:22.754963    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:22.783172    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:22.783222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:22.840048    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:22.840048    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:22.900137    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:22.900137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:25.445900    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
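
Judging by the timestamps, this pgrep probe for a kube-apiserver process re-runs roughly every three seconds, with a full diagnostics pass between attempts. A sketch of such a poll-until-deadline loop; the interval and deadline here are assumptions for illustration, not values taken from minikube:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(deadline time.Duration) error {
	timeout := time.After(deadline)
	tick := time.NewTicker(3 * time.Second) // roughly the spacing seen in the log
	defer tick.Stop()
	for {
		select {
		case <-timeout:
			return fmt.Errorf("kube-apiserver process never appeared")
		case <-tick.C:
			// pgrep exits 0 only when a matching process exists;
			// mirrors: sudo pgrep -xnf kube-apiserver.*minikube.*
			if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
				return nil
			}
		}
	}
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
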
	I1217 02:10:25.472646    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:25.502929    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.502929    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:25.506274    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:25.537721    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.537721    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:25.543044    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:25.572924    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.572924    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:25.576391    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:25.607737    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.607798    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:25.611457    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:25.644967    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.645041    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:25.648690    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:25.677801    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.677801    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:25.681530    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:25.709148    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.709148    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:25.715667    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:25.746892    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.746892    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:25.746892    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:25.746892    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:25.796336    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:25.796336    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:25.862353    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:25.862353    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:25.902100    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:25.902100    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:25.988926    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:25.979946    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.980923    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.983755    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.985453    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.986609    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:25.988926    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:25.988926    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:28.523475    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:28.549366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:28.580055    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.580055    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:28.583822    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:28.615168    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.615168    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:28.618724    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:28.650344    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.650368    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:28.654014    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:28.704033    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.704033    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:28.707699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:28.738871    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.738938    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:28.743270    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:28.775432    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.775432    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:28.779176    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:28.810234    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.810351    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:28.814357    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:28.845783    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.845783    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:28.845783    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:28.845783    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
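
The "container status" command above is a two-step fallback: it resolves crictl via which (falling back to the bare name), and if that invocation fails it runs "sudo docker ps -a" instead. The same decision expressed in Go, as an illustrative sketch rather than the report's actual tooling:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Prefer crictl when available, as the log's bash one-liner does.
	out, err := exec.Command("sudo", "crictl", "ps", "-a").CombinedOutput()
	if err != nil {
		// crictl missing or erroring: fall back, like `|| sudo docker ps -a`.
		out, err = exec.Command("sudo", "docker", "ps", "-a").CombinedOutput()
		if err != nil {
			fmt.Println("both crictl and docker failed:", err)
			return
		}
	}
	fmt.Print(string(out))
}
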
	I1217 02:10:28.902626    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:28.902626    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:28.963758    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:28.963758    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:29.002141    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:29.002141    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:29.104674    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:29.094415    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.095636    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.096872    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.097927    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.099112    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:29.104674    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:29.104674    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:31.640270    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:31.668862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:31.703099    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.703099    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:31.706355    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:31.737408    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.737408    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:31.741549    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:31.771462    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.771549    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:31.775645    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:31.803600    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.803600    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:31.807313    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:31.835884    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.835884    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:31.840000    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:31.870518    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.870518    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:31.877548    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:31.905387    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.905387    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:31.909722    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:31.938258    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.938284    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:31.938284    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:31.938284    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:32.000115    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:32.000115    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:32.039351    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:32.039351    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:32.128849    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:32.117556    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.118519    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.121192    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.122137    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.123350    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:32.128849    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:32.128849    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:32.155670    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:32.155670    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:10:28.083644    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:34.707099    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:34.732689    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:34.763625    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.763625    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:34.767349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:34.797435    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.797435    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:34.801415    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:34.828785    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.828785    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:34.832654    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:34.864748    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.864748    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:34.868392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:34.896365    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.896365    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:34.900474    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:34.932681    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.932681    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:34.936571    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:34.966056    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.966056    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:34.969208    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:34.998362    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.998362    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:34.998362    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:34.998362    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:35.036977    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:35.036977    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:35.134841    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:35.123096    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.125161    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.126319    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.127728    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.129900    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:35.134841    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:35.134841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:35.162429    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:35.162429    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:35.213960    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:35.214015    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:37.779857    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:37.806799    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:37.840730    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.840730    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:37.846443    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:37.875504    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.875504    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:37.879215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:37.910068    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.910068    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:37.913551    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:37.942897    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.942897    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:37.946741    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:37.978321    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.978321    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:37.982267    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:38.008421    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.008421    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:38.013043    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:38.043041    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.043041    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:38.049737    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:38.082117    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.082117    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:38.082117    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:38.082117    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:38.148970    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:38.148970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:38.189697    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:38.189697    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:38.276122    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:38.265842    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.267106    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.268317    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.270927    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.272044    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:38.276122    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:38.276122    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:38.304355    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:38.304355    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:40.862712    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:40.889041    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:40.921169    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.921169    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:40.924297    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:40.956313    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.956356    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:40.960294    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:40.990144    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.990144    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:40.993876    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:41.026732    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.026803    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:41.030745    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:41.073825    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.073825    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:41.078152    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:41.105859    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.105859    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:41.111714    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:41.143286    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.143324    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:41.146776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:41.176314    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.176345    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:41.176345    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:41.176345    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:41.213266    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:41.213266    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:41.300305    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:41.290426    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.291562    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.292511    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.293690    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.294979    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:41.300305    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:41.300305    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:41.328560    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:41.328621    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:41.375953    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:41.375953    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:10:38.119927    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:43.941613    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:43.967455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:44.000199    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.000199    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:44.003568    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:44.035058    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.035058    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:44.040590    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:44.083687    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.083687    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:44.087476    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:44.115776    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.115776    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:44.119318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:44.155471    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.155513    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:44.159433    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:44.191599    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.191636    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:44.195145    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:44.228181    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.228211    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:44.231971    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:44.259687    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.259763    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:44.259763    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:44.259763    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:44.323705    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:44.323705    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:44.365401    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:44.365401    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:44.453893    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:44.444848    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.446165    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.447569    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.449198    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.450326    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:44.453893    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:44.453893    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:44.480694    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:44.480694    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:47.042501    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:47.067663    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:47.108433    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.108433    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:47.112206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:47.144336    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.144336    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:47.148449    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:47.182968    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.183049    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:47.186614    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:47.215738    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.215738    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:47.219595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:47.248444    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.248511    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:47.252434    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:47.280975    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.280975    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:47.284966    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:47.317178    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.317178    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:47.321223    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:47.352638    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.352638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:47.352638    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:47.352638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:47.390049    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:47.390049    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:47.479425    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:47.469913    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.471092    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.472262    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.473545    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.474680    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:47.469913    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.471092    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.472262    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.473545    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.474680    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
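
Every "describe nodes" attempt fails the same way, and the stderr explains why: with no kube-apiserver container running, nothing inside the guest is listening on the default secure port 8443, so kubectl's discovery requests to https://localhost:8443 are refused before they reach any API. The stdlib-only Go sketch below reproduces that symptom check; the address and port are the ones in the log, but the check itself is an illustration, not anything minikube runs.

    // dial_check.go - reproduces the symptom behind the kubectl errors
    // above: if no apiserver is bound to localhost:8443, a plain TCP dial
    // is refused before TLS or HTTP even start. Sketch only, stdlib only.
    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
        if err != nil {
            // With the apiserver down this prints a "connection refused"
            // error, matching the memcache.go lines in the log.
            fmt.Println("apiserver unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("something is listening on localhost:8443")
    }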
	I1217 02:10:47.479425    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:47.479425    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:47.505331    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:47.505331    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:47.556431    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:47.556431    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:50.124255    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:50.151100    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:50.184499    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.184565    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:50.187696    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:50.221764    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.221764    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:50.225471    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:50.253823    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.253823    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:50.260470    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:50.289768    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.289815    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:50.295283    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:50.321597    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.321597    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:50.325774    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:50.356707    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.356707    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:50.360685    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:50.390099    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.390099    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:50.393971    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:50.420950    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.420950    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:50.420950    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:50.420950    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:50.484730    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:50.484730    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:50.523997    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:50.523997    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:50.618256    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:50.607046    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.608047    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.610609    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.611743    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.612938    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:50.607046    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.608047    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.610609    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.611743    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.612938    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:50.618256    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:50.618256    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:50.645077    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:50.645077    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:10:48.158175    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:53.200622    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:53.223348    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:53.253589    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.253589    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:53.258688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:53.287647    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.287689    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:53.291555    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:53.324358    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.324403    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:53.327650    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:53.355417    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.355417    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:53.359780    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:53.390012    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.390012    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:53.393536    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:53.420636    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.420672    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:53.424429    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:53.453665    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.453744    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:53.456764    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:53.486769    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.486836    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:53.486875    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:53.486875    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:53.552513    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:53.552513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:53.593054    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:53.593054    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:53.683171    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:53.673168    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.674217    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.677093    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.678848    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.679784    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:53.673168    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.674217    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.677093    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.678848    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.679784    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:53.683207    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:53.683230    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:53.712513    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:53.712513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:56.288600    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:56.314380    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:56.347447    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.347447    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:56.351158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:56.381779    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.381779    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:56.385232    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:56.423000    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.423000    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:56.427083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:56.456635    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.456635    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:56.460509    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:56.490868    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.490868    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:56.496594    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:56.523671    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.523671    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:56.527847    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:56.559992    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.559992    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:56.565352    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:56.591708    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.591708    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:56.591708    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:56.591708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:56.656572    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:56.656572    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:56.696334    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:56.696334    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:56.788411    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:56.777962   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.779251   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.780163   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.782593   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.783670   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:56.777962   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.779251   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.780163   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.782593   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.783670   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:56.788411    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:56.788411    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:56.815762    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:56.815762    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:59.370676    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:59.404615    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:59.440735    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.440735    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:59.446758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:59.475209    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.475209    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:59.479521    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:59.509465    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.509465    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:59.513228    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:59.542409    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.542409    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:59.546008    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:59.575778    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.575778    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:59.579759    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:59.613465    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.613465    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:59.617266    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:59.645245    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.645245    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:59.649170    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:59.680413    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.680449    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:59.680449    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:59.680449    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:59.713987    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:59.713987    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:59.764930    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:59.764994    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:59.832077    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:59.832077    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:59.870681    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:59.870681    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:59.953336    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:59.942085   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.942906   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.945651   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.947051   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.948218   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:59.942085   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.942906   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.945651   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.947051   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.948218   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W1217 02:10:58.200115    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
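
Interleaved with the 6296 log collector, process 6768 (the start of the no-preload-184000 cluster) is polling the node's Ready condition through the forwarded endpoint https://127.0.0.1:63565 and getting EOF each time, consistent with an apiserver that never stays up. The rough Go sketch below shows the shape of such a readiness poll against the URL from the log; authentication and TLS verification are deliberately omitted to keep it short (a real client would use the cluster's kubeconfig, as kubectl does), so treat it as illustration only, not minikube's node_ready.go.

    // node_ready_poll.go - rough sketch of the readiness check behind the
    // node_ready.go warnings above: fetch the Node object and look for
    // the Ready condition. Auth is omitted and TLS verification is
    // skipped purely to keep the sketch short; real code must not do that.
    package main

    import (
        "crypto/tls"
        "encoding/json"
        "fmt"
        "net/http"
        "time"
    )

    type node struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get(
            "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000")
        if err != nil {
            fmt.Println("will retry:", err) // EOF here matches the log
            return
        }
        defer resp.Body.Close()
        var n node
        if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
            fmt.Println("decode:", err)
            return
        }
        for _, c := range n.Status.Conditions {
            if c.Type == "Ready" {
                fmt.Println("Ready =", c.Status)
            }
        }
    }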
	I1217 02:11:02.457745    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:02.492666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:02.526665    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.526665    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:02.530862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:02.560353    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.560413    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:02.564099    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:02.595430    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.595430    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:02.599884    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:02.629744    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.629744    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:02.633637    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:02.662623    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.662623    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:02.666817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:02.694696    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.694696    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:02.698194    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:02.727384    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.727442    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:02.731483    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:02.766114    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.766114    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:02.766114    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:02.766114    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:02.830755    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:02.830755    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:02.870216    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:02.870216    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:02.958327    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:02.947356   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.948306   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.949403   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.950298   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.952486   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:02.947356   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.948306   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.949403   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.950298   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.952486   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:02.958327    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:02.958380    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:02.984980    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:02.984980    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:05.540158    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:05.564812    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:05.595638    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.595638    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:05.599748    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:05.628748    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.628748    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:05.632878    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:05.666232    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.666257    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:05.670293    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:05.699654    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.699654    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:05.703004    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:05.733113    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.733113    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:05.737096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:05.765591    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.765639    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:05.770398    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:05.796360    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.796360    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:05.800240    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:05.829847    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.829914    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:05.829914    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:05.829945    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:05.880789    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:05.880789    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:05.943002    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:05.943002    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:05.983389    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:05.983389    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:06.076023    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:06.063780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.064562   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.067564   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.069726   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.070666   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:06.063780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.064562   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.067564   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.069726   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.070666   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:06.076023    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:06.076023    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:08.608606    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:08.632215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:08.665017    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.665017    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:08.669299    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:08.695355    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.695355    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:08.699306    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:08.729054    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.729054    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:08.732454    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:08.759881    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.759881    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:08.764328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:08.793695    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.793777    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:08.797908    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:08.826225    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.826225    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:08.829679    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:08.859645    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.859645    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:08.863083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:08.893657    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.893657    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:08.893657    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:08.893657    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:08.958163    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:08.958163    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:08.997418    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:08.997418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:09.087973    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:09.074815   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.076834   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.078823   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.080747   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.081590   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:09.074815   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.076834   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.078823   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.080747   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.081590   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:09.087973    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:09.087973    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:09.115687    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:09.115687    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:11.697770    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:11.725676    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:11.758809    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.758809    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:11.762929    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:11.794198    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.794198    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:11.798023    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:11.828890    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.828890    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:11.833358    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:11.865217    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.865217    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:11.868915    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:11.897672    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.897672    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:11.901235    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:11.931725    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.931808    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:11.935264    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:11.966263    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.966263    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:11.970422    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:11.999856    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.999856    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:11.999856    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:11.999856    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:12.064137    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:12.064137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:12.102491    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:12.102491    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:12.183568    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:12.174095   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.175081   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.176122   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.177427   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.178548   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:12.174095   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.175081   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.176122   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.177427   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.178548   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:12.183568    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:12.183568    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:12.212178    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:12.212178    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:11:08.241744    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:11:16.871278    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1217 02:11:16.871278    6768 node_ready.go:38] duration metric: took 6m0.0008728s for node "no-preload-184000" to be "Ready" ...
	I1217 02:11:16.874572    6768 out.go:203] 
	W1217 02:11:16.876457    6768 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 02:11:16.876457    6768 out.go:285] * 
	W1217 02:11:16.879042    6768 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:11:16.881673    6768 out.go:203] 
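
That "context deadline exceeded" is the readiness wait expiring: the duration metric above shows the poll ran for its full 6m0s budget without the node ever reporting Ready, so minikube aborts with GUEST_START and prints the issue-reporting box. The stdlib Go sketch below shows a deadline-bounded wait of that general shape; it is an illustrative reconstruction, not minikube's actual wait code.

    // wait_ready.go - stdlib sketch of a bounded wait like the one that
    // produced the "context deadline exceeded" above: poll a condition
    // until it reports true or the 6-minute budget runs out.
    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    func waitForReady(ctx context.Context, check func() (bool, error)) error {
        ticker := time.NewTicker(10 * time.Second)
        defer ticker.Stop()
        for {
            if ok, err := check(); err == nil && ok {
                return nil
            }
            select {
            case <-ctx.Done():
                return fmt.Errorf("waiting for node to be ready: %w", ctx.Err())
            case <-ticker.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        err := waitForReady(ctx, func() (bool, error) {
            // Stand-in condition: the apiserver never comes up, as in the log.
            return false, errors.New("apiserver unreachable")
        })
        fmt.Println(err) // after 6m, prints the wrapped deadline error
    }

Wrapping ctx.Err() keeps the deadline cause inside the returned message, much as the nested "wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded" string above suggests.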
	I1217 02:11:14.772821    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:14.797656    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:14.826900    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.826900    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:14.829894    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:14.859202    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.859202    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:14.862783    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:14.891414    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.891414    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:14.895052    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:14.925404    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.925404    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:14.928966    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:14.959295    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.959330    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:14.962893    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:14.991696    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.991730    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:14.994776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:15.025468    6296 logs.go:282] 0 containers: []
	W1217 02:11:15.025468    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:15.031674    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:15.060661    6296 logs.go:282] 0 containers: []
	W1217 02:11:15.060661    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:15.060733    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:15.060733    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:15.120513    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:15.120513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:15.159608    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:15.159608    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:15.244418    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:15.235611   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.236439   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.238662   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.239643   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.240776   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:15.235611   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.236439   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.238662   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.239643   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.240776   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:15.244418    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:15.244418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:15.271288    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:15.271288    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
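The probe loop above can be reproduced by hand. A minimal sketch, assuming SSH access to the node and the k8s_ name prefix that cri-dockerd gives kubelet-managed containers:

    # probe each control-plane component the way the log-gatherer does
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
      [ -z "$ids" ] && echo "no container matching ${c}"
    done

With no kubelet-created containers present at all, every probe returns an empty list, which is what each W-line above records.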
	I1217 02:11:17.830556    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:17.850600    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:17.886696    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.886696    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:17.890674    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:17.921702    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.921702    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:17.924697    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:17.952692    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.952692    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:17.956701    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:17.984691    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.984691    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:17.988655    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:18.024626    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.024663    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:18.028558    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:18.060310    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.060310    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:18.064024    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:18.100124    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.100124    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:18.104105    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:18.141223    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.141223    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:18.141223    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:18.141223    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:18.179686    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:18.179686    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:18.311240    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:18.298507   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.299764   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.301130   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.305360   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.306018   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:18.311240    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:18.311240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:18.342566    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:18.342615    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:18.393872    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:18.393872    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
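Every describe-nodes attempt fails identically: kubectl inside the node dials https://localhost:8443 and gets connection refused, which is consistent with the empty kube-apiserver probe above; nothing is listening on the port. A minimal check, assuming shell access to the node:

    # confirm the apiserver port is unbound before suspecting kubeconfig
    sudo ss -ltn 'sport = :8443'
    sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'kube-apiserver is not running'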
	I1217 02:11:20.977693    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:21.006733    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:21.035136    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.035201    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:21.039202    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:21.069636    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.069636    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:21.075448    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:21.105437    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.105437    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:21.108735    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:21.136602    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.136602    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:21.140124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:21.168674    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.168674    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:21.172368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:21.204723    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.204723    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:21.208123    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:21.237130    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.237130    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:21.240654    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:21.268170    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.268170    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:21.268170    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:21.268170    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:21.333642    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:21.333642    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:21.372230    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:21.372230    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:21.467012    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:21.456191   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457465   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457898   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.460543   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.461536   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:21.467012    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:21.467012    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:21.495867    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:21.495867    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
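The container-status step is deliberately tolerant of a missing crictl binary: the command substitution either resolves crictl's path or leaves the bare word in place, and when that fails to execute, the || branch falls back to plain docker. The same logic in $( ) form, as a sketch rather than minikube's exact code:

    # prefer crictl when present, otherwise fall back to docker
    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a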
	I1217 02:11:24.053568    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:24.079587    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:24.110362    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.110399    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:24.113326    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:24.141818    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.141818    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:24.145313    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:24.172031    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.172031    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:24.176197    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:24.205114    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.205133    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:24.208437    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:24.238244    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.238244    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:24.242692    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:24.271687    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.271687    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:24.276384    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:24.307922    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.307922    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:24.311538    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:24.350108    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.350108    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:24.350108    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:24.350108    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:24.402159    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:24.402224    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:24.463824    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:24.463824    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:24.503645    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:24.503645    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:24.591969    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:24.584283   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.585294   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.586182   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.588436   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.589378   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:24.591969    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:24.591969    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:27.123965    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:27.157839    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:27.199991    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.199991    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:27.204206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:27.231981    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.231981    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:27.235568    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:27.265668    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.265668    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:27.269162    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:27.299488    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.299488    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:27.303277    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:27.335769    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.335769    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:27.339516    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:27.369112    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.369112    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:27.372881    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:27.402031    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.402031    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:27.405780    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:27.436610    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.436610    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:27.436610    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:27.436610    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:27.523394    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:27.514396   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.515456   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.516979   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.518950   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.519928   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:27.523917    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:27.523957    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:27.552476    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:27.552476    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:27.607026    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:27.607078    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:27.670834    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:27.670834    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
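The gathering steps map to ordinary node-side commands. In the dmesg call, the short flags -PH -L=never expand to --nopager --human --color=never; runnable directly over SSH:

    sudo journalctl -u kubelet -n 400               # last 400 kubelet entries
    sudo journalctl -u docker -u cri-docker -n 400  # docker + cri-docker units
    sudo dmesg --nopager --human --color=never \
         --level warn,err,crit,alert,emerg | tail -n 400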
	I1217 02:11:30.216027    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:30.241711    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:30.272275    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.272275    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:30.276071    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:30.304635    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.304635    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:30.307639    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:30.340374    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.340374    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:30.343758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:30.374162    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.374162    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:30.378010    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:30.407836    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.407836    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:30.411411    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:30.440002    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.440002    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:30.443429    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:30.472647    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.472647    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:30.476538    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:30.510744    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.510744    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:30.510744    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:30.510744    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:30.575069    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:30.575156    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:30.639732    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:30.640731    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:30.685195    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:30.685195    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:30.775246    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:30.762447   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.763441   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.764998   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.765913   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.768466   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:30.775295    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:30.775295    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:33.308109    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:33.334329    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:33.365061    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.365061    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:33.370854    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:33.399488    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.399488    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:33.406335    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:33.436434    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.436434    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:33.439783    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:33.468947    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.468947    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:33.474014    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:33.502568    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.502568    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:33.506146    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:33.535706    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.535706    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:33.540016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:33.573811    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.573811    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:33.577712    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:33.606321    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.606321    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:33.606321    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:33.606321    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:33.671884    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:33.671884    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:33.712095    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:33.712095    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:33.800767    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:33.788569   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.789526   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.793280   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.794779   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.795796   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:33.800848    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:33.800884    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:33.829402    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:33.829474    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:36.410236    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:36.438912    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:36.468229    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.468229    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:36.472231    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:36.501220    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.501220    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:36.506462    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:36.539556    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.539556    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:36.543603    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:36.584367    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.584367    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:36.588513    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:36.620670    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.620670    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:36.626030    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:36.654239    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.654239    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:36.658962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:36.689023    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.689023    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:36.693754    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:36.721351    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.721351    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:36.721351    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:36.721351    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:36.787832    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:36.787832    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:36.828019    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:36.828019    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:36.916923    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:36.906317   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.907259   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.909560   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.910589   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.911494   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:36.916923    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:36.916923    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:36.946231    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:36.946265    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:39.498459    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:39.522909    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:39.553462    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.553462    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:39.557524    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:39.585462    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.585462    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:39.591342    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:39.619332    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.619399    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:39.623096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:39.651071    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.651071    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:39.654766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:39.683502    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.683502    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:39.687390    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:39.715332    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.715332    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:39.718932    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:39.749019    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.749019    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:39.752739    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:39.783378    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.783378    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:39.783378    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:39.783378    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:39.835019    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:39.835019    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:39.899542    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:39.899542    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:39.938717    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:39.938717    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:40.026359    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:40.016461   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.017619   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.018723   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.019917   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.021008   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:40.026403    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:40.026446    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:42.561805    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:42.585507    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:42.613091    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.613091    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:42.616991    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:42.647608    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.647608    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:42.651380    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:42.680540    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.680540    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:42.683625    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:42.717014    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.717014    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:42.721369    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:42.750017    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.750017    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:42.753961    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:42.785164    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.785164    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:42.788883    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:42.817424    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.817424    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:42.821266    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:42.853247    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.853247    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:42.853247    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:42.853247    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:42.910034    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:42.910052    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:42.970436    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:42.970436    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:43.009833    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:43.010830    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:43.102803    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:43.091179   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.092013   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.095588   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.097098   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.098447   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:43.102803    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:43.102803    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
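By this point the same probe-and-gather cycle has repeated on a roughly three-second cadence with no change in the result. A hand-rolled equivalent of that wait, with an explicit deadline (the values are illustrative, not minikube's):

    # retry until the apiserver answers its health endpoint or time runs out
    deadline=$((SECONDS + 300))
    until curl -ksf https://localhost:8443/healthz >/dev/null 2>&1; do
      if [ "$SECONDS" -ge "$deadline" ]; then
        echo 'apiserver never came up' >&2; exit 1
      fi
      sleep 3
    done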
	I1217 02:11:45.636418    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:45.661677    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:45.695141    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.695141    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:45.699189    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:45.729376    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.729376    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:45.733753    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:45.764365    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.764365    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:45.767917    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:45.799287    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.799287    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:45.802968    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:45.835270    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.835270    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:45.838766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:45.868660    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.868660    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:45.875727    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:45.903566    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.903566    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:45.907562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:45.937452    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.937452    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
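
The eight probes above are how the retry loop decides that nothing is running yet: one "docker ps -a" query per expected control-plane container (minikube names them k8s_<component>), where an empty ID list yields the "No container was found matching" warning. A minimal Go sketch of the same probe, assuming a local docker CLI on PATH (component names and filter format copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same filter the log shows: containers are named k8s_<component>_...
	components := []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
	}
	for _, c := range components {
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Printf("docker ps failed for %s: %v\n", c, err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%d containers: %v\n", len(ids), ids)
		if len(ids) == 0 {
			fmt.Printf("no container was found matching %q\n", c)
		}
	}
}
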
	I1217 02:11:45.937452    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:45.937452    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:45.965091    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:45.965091    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:46.013173    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:46.013173    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:46.077113    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:46.077113    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:46.118527    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:46.118527    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:46.207662    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:46.198319   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.199665   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.200697   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.201868   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.202946   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
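
Every kubectl attempt in this run dies the same way: the client dials https://localhost:8443, gets connection refused because no apiserver is listening yet, and the caller retries roughly every three seconds (compare the timestamps between cycles). A minimal sketch of that reachability poll, with an illustrative two-minute deadline rather than minikube's actual timeout:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Probe the port kubectl is failing to reach in the log above.
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
		if err == nil {
			conn.Close()
			fmt.Println("apiserver port is accepting connections")
			return
		}
		fmt.Println("still refused:", err)
		time.Sleep(3 * time.Second) // roughly the cadence visible in the log
	}
	fmt.Println("gave up: localhost:8443 never came up")
}
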
	I1217 02:11:48.714055    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:48.741412    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:48.772767    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.772767    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:48.776092    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:48.804946    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.805020    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:48.808538    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:48.837488    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.837488    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:48.840453    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:48.871139    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.871139    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:48.875518    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:48.904264    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.904264    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:48.911351    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:48.939118    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.939118    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:48.943340    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:48.970934    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.970934    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:48.974990    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:49.005140    6296 logs.go:282] 0 containers: []
	W1217 02:11:49.005174    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:49.005205    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:49.005234    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:49.075925    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:49.075925    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:49.116144    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:49.116144    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:49.196968    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:49.188036   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.189151   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.190274   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.191246   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.192420   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:49.197074    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:49.197074    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:49.222883    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:49.223404    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
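
The "container status" step is a shell fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a, i.e. use crictl if it resolves, and fall back to the Docker CLI when crictl is absent or its invocation fails. A sketch of the same preference order in Go, run locally without the sudo/SSH wrapping the log shows:

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus prefers crictl, falling back to docker, mirroring the
// `which crictl || echo crictl` ps -a || docker ps -a chain above.
func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("crictl", "ps", "-a").Output(); err == nil {
			return out, nil
		}
	}
	return exec.Command("docker", "ps", "-a").Output() // fallback path
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both crictl and docker failed:", err)
		return
	}
	fmt.Print(string(out))
}
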
	I1217 02:11:51.783312    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:51.809151    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:51.839751    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.839751    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:51.844016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:51.895178    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.895178    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:51.899341    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:51.930311    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.930311    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:51.933797    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:51.961857    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.961857    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:51.966036    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:51.993647    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.993647    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:51.997672    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:52.026485    6296 logs.go:282] 0 containers: []
	W1217 02:11:52.026485    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:52.032726    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:52.062039    6296 logs.go:282] 0 containers: []
	W1217 02:11:52.062039    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:52.066379    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:52.096772    6296 logs.go:282] 0 containers: []
	W1217 02:11:52.096772    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:52.096772    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:52.096772    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:52.163369    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:52.163369    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:52.203719    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:52.203719    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:52.295324    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:52.285688   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.286944   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.288407   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.289493   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.290536   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:52.295324    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:52.295324    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:52.323234    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:52.323234    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:54.878824    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:54.907441    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:54.944864    6296 logs.go:282] 0 containers: []
	W1217 02:11:54.944864    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:54.948030    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:54.980769    6296 logs.go:282] 0 containers: []
	W1217 02:11:54.980769    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:54.987506    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:55.019726    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.019726    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:55.024226    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:55.052618    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.052618    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:55.056658    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:55.085528    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.085607    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:55.089212    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:55.120453    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.120525    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:55.124591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:55.154725    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.154749    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:55.157707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:55.187692    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.187692    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:55.187692    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:55.187692    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:55.252848    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:55.252848    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:55.318197    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:55.318197    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:55.358145    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:55.358145    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:55.439213    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:55.430988   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.431927   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.433074   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.434586   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.435691   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:55.439213    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:55.439744    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
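
Each "Gathering logs for ..." line is one bounded journald query, capped at the last 400 lines per unit (-n 400). A local sketch collecting the same units; the dmesg step is omitted here because its tail filter needs a shell pipeline:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Unit names taken from the journalctl invocations in the log above.
	cmds := map[string][]string{
		"Docker":  {"journalctl", "-u", "docker", "-u", "cri-docker", "-n", "400"},
		"kubelet": {"journalctl", "-u", "kubelet", "-n", "400"},
	}
	for name, argv := range cmds {
		out, err := exec.Command("sudo", argv...).CombinedOutput()
		if err != nil {
			fmt.Printf("gathering %s logs failed: %v\n", name, err)
			continue
		}
		fmt.Printf("==> %s (%d bytes)\n", name, len(out))
	}
}
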
	I1217 02:11:57.972346    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:57.997412    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:58.029794    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.029794    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:58.033582    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:58.064729    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.064729    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:58.068722    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:58.103854    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.103854    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:58.107069    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:58.140767    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.140767    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:58.145080    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:58.172792    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.172792    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:58.177038    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:58.205809    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.205809    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:58.209371    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:58.236353    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.236353    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:58.240621    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:58.269469    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.269469    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:58.269469    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:58.269469    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:58.324960    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:58.324960    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:58.384708    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:58.384708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:58.423476    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:58.423476    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:58.512328    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:58.500192   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.501577   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.503665   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.506831   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.509044   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:58.512387    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:58.512387    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:01.044354    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:01.073699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:01.104765    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.104836    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:01.107915    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:01.141131    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.141131    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:01.145209    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:01.174536    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.174536    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:01.178187    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:01.209172    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.209172    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:01.212803    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:01.241435    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.241486    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:01.245545    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:01.277115    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.277115    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:01.281366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:01.312158    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.312158    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:01.316725    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:01.343220    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.343220    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:01.343220    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:01.343220    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:01.382233    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:01.382233    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:01.487570    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:01.476084   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.477142   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.479990   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.481020   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.482426   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:01.488578    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:01.488578    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:01.514572    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:01.514572    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:01.567754    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:01.567754    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:04.140604    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:04.165376    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:04.197379    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.197379    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:04.202896    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:04.231436    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.231506    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:04.235354    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:04.267960    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.267960    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:04.271789    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:04.301108    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.301108    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:04.305219    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:04.334515    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.334515    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:04.338693    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:04.366071    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.366071    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:04.369958    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:04.398457    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.398457    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:04.405087    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:04.432495    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.432495    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:04.432495    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:04.432495    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:04.492454    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:04.492454    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:04.530878    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:04.530878    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:04.615739    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:04.603893   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.604965   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.606519   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.608498   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.609457   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
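
Note that "describe nodes" is gathered with the guest's own kubectl binary and kubeconfig, so it targets localhost:8443 inside the node rather than any host-side context; while the apiserver is down it exits 1 with exactly the stderr shown above. A sketch of that invocation as run on the node (binary and kubeconfig paths copied from the log):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig").CombinedOutput()
	if err != nil {
		// Expected while nothing listens on localhost:8443: exit status 1.
		fmt.Printf("describe nodes failed: %v\n%s", err, out)
		return
	}
	fmt.Print(string(out))
}
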
	I1217 02:12:04.615739    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:04.615739    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:04.643270    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:04.643304    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:07.195429    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:07.221998    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:07.254842    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.254842    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:07.258578    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:07.291820    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.291820    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:07.297979    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:07.329603    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.329603    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:07.334181    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:07.363276    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.363324    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:07.367248    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:07.394630    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.394695    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:07.398679    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:07.425998    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.425998    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:07.429814    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:07.458824    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.458878    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:07.462682    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:07.490543    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.490614    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:07.490614    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:07.490614    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:07.575806    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:07.562525   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.563684   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.568204   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.569084   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.572372   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:07.575806    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:07.576816    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:07.607910    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:07.607910    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:07.659155    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:07.659155    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:07.722240    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:07.722240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:10.270711    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:10.295753    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:10.324920    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.324920    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:10.328903    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:10.358180    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.358218    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:10.362249    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:10.390135    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.390135    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:10.393738    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:10.423058    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.423090    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:10.426534    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:10.456745    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.456745    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:10.463439    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:10.493765    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.493765    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:10.497858    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:10.526425    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.526425    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:10.532217    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:10.563338    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.563338    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:10.563338    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:10.563338    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:10.627669    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:10.627669    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:10.666455    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:10.666455    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:10.755613    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:10.742575   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.744309   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.748746   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.750149   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.751294   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:10.755613    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:10.755613    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:10.786516    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:10.787045    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:13.342631    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:13.368870    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:13.402304    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.402347    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:13.408012    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:13.436633    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.436710    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:13.439877    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:13.468754    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.469007    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:13.473752    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:13.505247    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.505324    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:13.509766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:13.538745    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.538745    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:13.542743    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:13.571986    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.571986    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:13.575522    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:13.604002    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.604002    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:13.608063    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:13.636028    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.636028    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:13.636028    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:13.636028    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:13.701418    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:13.701418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:13.740729    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:13.740729    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:13.830687    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:13.819650   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.820972   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.822197   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.823236   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.826085   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:13.819650   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.820972   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.822197   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.823236   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.826085   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
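
This describe-nodes failure is the key symptom of the run: kubectl, pointed at the in-node kubeconfig, cannot reach the apiserver on localhost:8443 because, as the container scan shows, no kube-apiserver container ever started, so nothing is listening on that port. A quick confirmation from inside the node (a sketch; it assumes ss and curl are available in the minikube image):

    $ sudo ss -ltn 'sport = :8443'              # expect an empty listener table
    $ curl -ksS https://localhost:8443/healthz  # expect "connection refused"
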
	I1217 02:12:13.830746    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:13.830768    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:13.856732    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:13.856732    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
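
The container-status collector is a fallback chain: it prefers crictl when present (the backtick substitution `which crictl || echo crictl` keeps the command line from breaking when crictl is missing) and falls back to the Docker engine view if the crictl invocation fails. The same idea written out more explicitly (a sketch, not the exact minikube logic, which also falls back when crictl itself errors):

    $ if command -v crictl >/dev/null 2>&1; then
          sudo crictl ps -a      # CRI view of all containers
      else
          sudo docker ps -a      # fall back to the Docker engine view
      fi
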
	I1217 02:12:16.415071    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
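
This pgrep probe is the retry gate of the wait loop: -f matches the pattern against the full command line, -x requires the match to cover that line exactly, and -n reports only the newest matching PID. The roughly 3 s spacing between the timestamped cycles in this log is the poll interval; each cycle below repeats the probe, the container scan, and the log gathering until the start timeout expires. Stand-alone form of the probe (a sketch):

    $ sudo pgrep -xnf 'kube-apiserver.*minikube.*' \
          && echo "apiserver process found" \
          || echo "not running yet"    # pgrep exits 1 when nothing matches
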
	I1217 02:12:16.441827    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:16.474920    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.474920    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:16.478560    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:16.509149    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.509149    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:16.512927    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:16.544114    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.544114    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:16.547867    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:16.578111    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.578111    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:16.581776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:16.610586    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.610586    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:16.614807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:16.644103    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.644103    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:16.647954    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:16.692289    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.692289    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:16.696153    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:16.727229    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.727229    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:16.727229    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:16.727229    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:16.823236    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:16.813914   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.815339   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.816582   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.817632   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.818568   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:16.813914   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.815339   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.816582   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.817632   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.818568   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:16.823236    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:16.823236    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:16.849827    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:16.849827    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:16.905388    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:16.905414    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:16.965153    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:16.965153    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:19.511192    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:19.537347    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:19.568920    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.568920    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:19.573318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:19.604587    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.604587    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:19.608244    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:19.637707    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.637732    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:19.641314    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:19.669047    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.669047    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:19.672932    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:19.703243    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.703243    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:19.706862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:19.738948    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.738948    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:19.742483    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:19.773620    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.773620    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:19.777766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:19.807218    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.807218    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:19.807218    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:19.807218    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:19.872750    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:19.872750    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:19.912835    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:19.912835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:19.997398    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:19.986540   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.987576   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.989197   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.992124   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.993453   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:19.986540   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.987576   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.989197   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.992124   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.993453   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:19.997398    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:19.997398    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:20.025629    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:20.025629    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:22.593289    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:22.619754    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:22.652929    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.652929    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:22.657635    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:22.689768    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.689846    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:22.693504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:22.720087    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.720087    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:22.723840    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:22.752902    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.752959    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:22.757109    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:22.787369    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.787369    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:22.791584    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:22.822117    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.822117    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:22.825675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:22.856022    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.856022    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:22.859609    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:22.886982    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.886982    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:22.886982    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:22.886982    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:22.972988    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:22.964488   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.965494   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.966951   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.967984   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.968891   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:22.964488   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.965494   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.966951   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.967984   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.968891   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:22.972988    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:22.972988    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:23.002037    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:23.002037    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:23.061548    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:23.061548    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:23.124352    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:23.124352    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:25.670974    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:25.706279    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:25.741150    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.741150    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:25.745079    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:25.773721    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.773782    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:25.779777    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:25.808516    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.808516    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:25.813011    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:25.844755    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.844755    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:25.848591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:25.877332    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.877332    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:25.881053    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:25.907973    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.907973    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:25.914424    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:25.941138    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.941138    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:25.945025    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:25.974760    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.974760    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:25.974760    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:25.974760    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:26.012354    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:26.012354    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:26.113177    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:26.103007   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.104679   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.105508   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.108836   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.110003   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:26.103007   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.104679   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.105508   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.108836   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.110003   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:26.113177    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:26.113177    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:26.144162    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:26.144245    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:26.194605    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:26.195138    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:28.763811    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:28.789762    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:28.820544    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.820544    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:28.824807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:28.855728    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.855728    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:28.860354    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:28.894655    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.894655    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:28.898069    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:28.928310    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.928394    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:28.932124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:28.967209    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.967209    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:28.973126    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:29.002975    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.003024    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:29.006839    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:29.044805    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.044881    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:29.049158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:29.078108    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.078142    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:29.078174    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:29.078202    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:29.142751    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:29.142751    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:29.182082    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:29.182082    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:29.271566    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:29.260263   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.261578   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.262370   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.263821   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.265155   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:29.260263   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.261578   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.262370   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.263821   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.265155   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:29.271596    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:29.271643    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:29.299332    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:29.299332    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:31.856743    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:31.882741    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:31.912323    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.912372    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:31.917046    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:31.948587    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.948631    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:31.952095    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:31.981682    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.981682    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:31.985888    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:32.022173    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.022173    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:32.026061    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:32.070026    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.070026    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:32.074016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:32.105255    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.105255    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:32.109062    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:32.140873    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.140947    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:32.143941    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:32.172848    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.172876    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:32.172876    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:32.172876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:32.237207    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:32.237207    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:32.275838    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:32.275838    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:32.360656    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:32.349190   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.350542   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.352960   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.354559   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.355745   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:32.349190   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.350542   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.352960   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.354559   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.355745   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:32.360656    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:32.360656    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:32.391099    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:32.391099    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:34.970955    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:35.002200    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:35.036658    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.036658    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:35.041208    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:35.068998    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.068998    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:35.075758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:35.105253    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.105253    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:35.109356    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:35.137411    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.137411    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:35.141289    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:35.168542    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.168542    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:35.174717    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:35.204677    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.204677    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:35.209675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:35.240901    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.240901    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:35.244034    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:35.276453    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.276453    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:35.276453    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:35.276453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:35.341158    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:35.341158    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:35.381822    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:35.381822    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:35.472890    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:35.461861   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.463097   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.464080   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.465245   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.466603   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:35.461861   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.463097   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.464080   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.465245   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.466603   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:35.472890    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:35.472890    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:35.501374    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:35.501374    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:38.054644    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:38.080787    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:38.112397    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.112420    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:38.116070    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:38.144341    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.144396    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:38.148080    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:38.177159    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.177159    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:38.181253    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:38.210000    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.210000    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:38.215709    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:38.243526    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.243526    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:38.247620    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:38.278443    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.278443    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:38.282504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:38.314414    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.314414    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:38.317968    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:38.345306    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.345306    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:38.345306    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:38.345412    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:38.425240    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:38.414795   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.415865   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.416969   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.418280   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.420090   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:38.414795   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.415865   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.416969   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.418280   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.420090   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:38.425240    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:38.425240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:38.455129    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:38.455129    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:38.514775    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:38.514775    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:38.574255    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:38.574255    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:41.116537    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:41.139650    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:41.169726    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.169814    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:41.173285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:41.204812    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.204812    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:41.208892    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:41.235980    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.235980    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:41.240200    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:41.271415    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.271415    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:41.275005    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:41.303967    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.303967    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:41.309707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:41.340401    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.340401    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:41.343688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:41.374008    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.374008    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:41.377325    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:41.409502    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.409563    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:41.409563    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:41.409610    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:41.472168    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:41.472168    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:41.513098    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:41.513098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:41.601716    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:41.590607   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.591236   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.594281   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.595448   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.596679   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:41.590607   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.591236   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.594281   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.595448   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.596679   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:41.601716    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:41.601716    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:41.629092    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:41.629148    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:44.185012    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:44.210566    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:44.242274    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.242274    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:44.248762    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:44.280241    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.280307    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:44.283818    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:44.312929    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.312997    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:44.316643    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:44.343840    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.343840    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:44.347619    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:44.378547    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.378547    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:44.382595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:44.410908    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.410908    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:44.414686    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:44.448329    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.448329    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:44.453888    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:44.484842    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.484842    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:44.484842    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:44.484842    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:44.550740    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:44.550740    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:44.589666    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:44.589666    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:44.677625    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:44.666291   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.667584   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.668804   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.671406   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.673722   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:44.666291   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.667584   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.668804   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.671406   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.673722   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:44.677625    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:44.677625    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:44.706051    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:44.706051    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
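Each retry cycle probes for every expected control-plane container by name filter, and all eight probes come back empty. The same probe written as one loop (a sketch using exactly the filters from this log):

	# One docker ps per expected k8s_* container, mirroring the eight checks above.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}'
	done

Empty output for every component is what produces the repeated "0 containers: []" lines.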
	I1217 02:12:47.257477    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:47.286845    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:47.315563    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.315563    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:47.319220    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:47.351319    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.351319    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:47.354946    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:47.382237    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.382237    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:47.386106    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:47.415608    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.415608    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:47.419575    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:47.449212    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.449241    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:47.452978    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:47.482356    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.482356    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:47.486511    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:47.518156    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.518205    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:47.522254    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:47.550631    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.550631    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:47.550631    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:47.550727    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:47.615950    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:47.615950    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:47.655928    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:47.655928    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:47.744126    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:47.732398   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.733599   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.736473   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.737237   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.739895   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:47.732398   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.733599   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.736473   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.737237   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.739895   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:47.744164    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:47.744210    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:47.773502    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:47.773502    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:50.331328    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:50.368555    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:50.407443    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.407443    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:50.411798    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:50.440520    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.440544    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:50.444430    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:50.478050    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.478050    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:50.481848    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:50.513603    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.513658    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:50.517565    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:50.551935    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.552946    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:50.556641    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:50.591171    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.591171    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:50.594981    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:50.624821    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.624821    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:50.628756    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:50.661209    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.661209    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:50.661209    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:50.661209    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:50.693141    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:50.693141    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:50.746322    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:50.746322    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:50.805974    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:50.805974    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:50.844572    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:50.844572    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:50.935133    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:50.925528   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.926281   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.929008   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.930044   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.931058   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:50.925528   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.926281   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.929008   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.930044   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.931058   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:53.441690    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:53.466017    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:53.494846    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.494846    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:53.499634    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:53.530839    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.530839    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:53.534661    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:53.567189    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.567189    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:53.571412    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:53.598763    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.598763    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:53.602673    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:53.629791    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.629791    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:53.632953    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:53.662323    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.662323    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:53.665394    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:53.695745    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.695745    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:53.701403    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:53.735348    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.735348    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:53.735348    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:53.735348    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:53.816532    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:53.807828   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.809036   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.810223   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.811373   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.812449   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:53.807828   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.809036   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.810223   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.811373   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.812449   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:53.816532    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:53.816532    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:53.843453    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:53.843453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:53.893853    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:53.893853    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:53.954759    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:53.954759    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:56.499506    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:56.525316    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:56.561689    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.561738    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:56.565616    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:56.594009    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.594009    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:56.599822    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:56.624101    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.624101    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:56.628604    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:56.657977    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.658063    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:56.663240    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:56.694316    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.694316    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:56.698763    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:56.728527    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.728527    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:56.734446    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:56.765315    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.765315    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:56.769182    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:56.796198    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.796198    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:56.796198    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:56.796198    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:56.864777    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:56.864777    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:56.904264    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:56.904264    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:57.000434    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:56.990265   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.991556   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.992920   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.993844   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.996033   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:56.990265   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.991556   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.992920   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.993844   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.996033   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:57.000434    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:57.000434    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:57.034757    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:57.034842    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:59.601768    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:59.627731    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:59.657009    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.657009    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:59.660962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:59.690428    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.690428    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:59.694181    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:59.723517    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.723592    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:59.727191    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:59.756251    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.756251    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:59.759627    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:59.791516    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.791516    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:59.795602    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:59.828192    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.828192    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:59.832003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:59.860258    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.860258    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:59.863635    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:59.893207    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.893207    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:59.893207    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:59.893207    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:59.958927    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:59.958927    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:00.004703    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:00.004703    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:00.096612    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:00.084050   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.085145   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.086221   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.088049   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.090502   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:00.084050   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.085145   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.086221   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.088049   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.090502   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:00.096612    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:00.096612    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:00.124914    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:00.124975    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:02.682962    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:02.708543    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:02.737663    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.737663    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:02.741817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:02.772482    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.772482    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:02.778562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:02.806978    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.806978    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:02.813021    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:02.845688    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.845688    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:02.851578    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:02.880144    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.880200    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:02.883811    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:02.918466    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.918544    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:02.922186    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:02.951702    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.951702    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:02.955491    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:02.984638    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.984638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:02.984638    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:02.984638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:03.047941    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:03.047941    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:03.086964    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:03.086964    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:03.173007    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:03.161327   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.162497   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.163381   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.165030   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.166441   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:03.161327   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.162497   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.163381   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.165030   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.166441   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:03.173086    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:03.173086    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:03.202017    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:03.202544    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:05.761010    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:05.786319    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:05.819785    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.819785    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:05.825532    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:05.853318    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.853318    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:05.858274    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:05.887613    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.887613    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:05.891162    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:05.919471    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.919471    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:05.922933    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:05.955441    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.955441    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:05.959241    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:05.984925    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.984925    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:05.989009    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:06.021101    6296 logs.go:282] 0 containers: []
	W1217 02:13:06.021101    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:06.024383    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:06.055098    6296 logs.go:282] 0 containers: []
	W1217 02:13:06.055098    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:06.055098    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:06.055098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:06.107743    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:06.107743    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:06.170319    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:06.170319    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:06.210360    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:06.210360    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:06.299194    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:06.288404   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.289415   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.292346   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.293307   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.294574   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:06.288404   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.289415   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.292346   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.293307   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.294574   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:06.299194    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:06.299194    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:08.832901    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:08.860263    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:08.890111    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.890111    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:08.893617    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:08.921989    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.921989    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:08.925561    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:08.952883    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.952883    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:08.959516    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:08.991347    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.991347    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:08.995066    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:09.028011    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.028011    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:09.032096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:09.060803    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.060803    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:09.064596    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:09.093542    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.093572    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:09.096987    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:09.123594    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.123615    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:09.123615    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:09.123615    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:09.176222    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:09.176222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:09.238935    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:09.238935    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:09.278804    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:09.278804    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:09.367283    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:09.355984   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.356989   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.358233   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.359697   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.360921   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:09.355984   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.356989   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.358233   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.359697   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.360921   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:09.367283    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:09.367283    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:11.901781    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:11.930493    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:11.963534    6296 logs.go:282] 0 containers: []
	W1217 02:13:11.963534    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:11.967747    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:11.997700    6296 logs.go:282] 0 containers: []
	W1217 02:13:11.997700    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:12.001601    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:12.031862    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.031862    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:12.035544    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:12.066506    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.066506    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:12.071472    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:12.103184    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.103184    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:12.107033    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:12.135713    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.135713    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:12.139268    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:12.170350    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.170350    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:12.174053    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:12.202964    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.202964    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:12.202964    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:12.202964    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:12.252669    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:12.253197    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:12.318088    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:12.318088    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:12.356768    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:12.356768    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:12.443857    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:12.431867   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.432694   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.435515   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.436810   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.439065   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:12.431867   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.432694   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.435515   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.436810   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.439065   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:12.443857    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:12.443857    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
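Each cycle above runs the same eight name-filtered container checks, and "0 containers: []" for every control-plane component suggests kubelet never created the control-plane static pods at all, so the loop can only keep re-collecting kubelet, dmesg, and Docker logs. The per-component check condenses to a short loop; a sketch built from the exact filters in the log (the "k8s_" prefix is the cri-dockerd container-naming convention):

	# list any container (running or exited) for each expected component
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(docker ps -a --filter "name=k8s_${c}" --format '{{.ID}}')
	  echo "${c}: ${ids:-no container found}"
	done

In the log this runs inside the node via ssh_runner; from the host it would be wrapped in minikube ssh.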
	... [the same log-gathering cycle repeats roughly every three seconds from 02:13:14 through 02:13:40: pgrep finds no kube-apiserver process, all eight "docker ps -a --filter=name=k8s_*" checks return "0 containers: []", and every "kubectl describe nodes" attempt fails with the same five memcache.go:265 "connection refused" errors against localhost:8443] ...
	I1217 02:13:42.757339    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:42.786178    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:42.817429    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.817429    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:42.821164    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:42.850363    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.850415    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:42.854031    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:42.881774    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.881774    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:42.885802    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:42.915556    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.915556    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:42.919184    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:42.948329    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.948329    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:42.952430    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:42.982355    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.982355    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:42.986768    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:43.017700    6296 logs.go:282] 0 containers: []
	W1217 02:13:43.017700    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:43.021284    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:43.052749    6296 logs.go:282] 0 containers: []
	W1217 02:13:43.052779    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:43.052779    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:43.052813    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:43.091605    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:43.091605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:43.175861    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:43.162839   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.163916   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.164763   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.167177   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.170134   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:43.162839   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.163916   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.164763   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.167177   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.170134   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:43.175861    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:43.175861    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:43.204569    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:43.204569    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:43.257132    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:43.257132    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:45.825092    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:45.853653    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:45.886780    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.886780    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:45.890416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:45.921840    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.923184    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:45.928382    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:45.960187    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.960252    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:45.963959    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:45.993658    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.993712    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:45.997113    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:46.024308    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.024308    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:46.027994    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:46.060725    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.060725    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:46.064446    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:46.092825    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.092825    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:46.098256    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:46.129614    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.129688    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:46.129688    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:46.129688    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:46.216242    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:46.204904   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.206123   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.207788   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.210288   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.211623   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:46.204904   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.206123   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.207788   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.210288   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.211623   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:46.216263    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:46.216263    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:46.248767    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:46.248767    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:46.298044    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:46.298044    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:46.363186    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:46.363186    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:48.911992    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:48.946588    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:48.983880    6296 logs.go:282] 0 containers: []
	W1217 02:13:48.983880    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:48.987999    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:49.017254    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.017254    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:49.021239    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:49.053619    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.053619    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:49.057711    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:49.086289    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.086289    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:49.090230    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:49.123069    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.123069    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:49.130107    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:49.158724    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.158724    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:49.162733    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:49.193515    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.193573    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:49.197116    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:49.230153    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.230201    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:49.230245    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:49.230245    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:49.259747    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:49.259747    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:49.312360    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:49.312456    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:49.375035    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:49.375035    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:49.413908    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:49.413908    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:49.508187    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:49.496893   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.499745   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.502343   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.503338   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.504593   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:49.496893   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.499745   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.502343   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.503338   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.504593   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:52.012834    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:52.037104    6296 out.go:203] 
	W1217 02:13:52.039462    6296 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 02:13:52.039520    6296 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 02:13:52.039588    6296 out.go:285] * Related issues:
	W1217 02:13:52.039588    6296 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1217 02:13:52.039635    6296 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1217 02:13:52.041923    6296 out.go:203] 
	
	
	==> Docker <==
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700732008Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700826718Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700839319Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700844420Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700849520Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700872823Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.700996336Z" level=info msg="Initializing buildkit"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.801833124Z" level=info msg="Completed buildkit initialization"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.807448530Z" level=info msg="Daemon has completed initialization"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.807644551Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.807743662Z" level=info msg="API listen on [::]:2376"
	Dec 17 02:07:46 newest-cni-383500 dockerd[929]: time="2025-12-17T02:07:46.807662953Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 02:07:46 newest-cni-383500 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 02:07:47 newest-cni-383500 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Loaded network plugin cni"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 02:07:47 newest-cni-383500 cri-dockerd[1223]: time="2025-12-17T02:07:47Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 02:07:47 newest-cni-383500 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:14:09.906206   20028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:14:09.907263   20028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:14:09.908872   20028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:14:09.910020   20028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:14:09.911309   20028 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +5.752411] CPU: 12 PID: 469779 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f8b9b910b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f8b9b910af6.
	[  +0.000001] RSP: 002b:00007fffc85e9670 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.875329] CPU: 10 PID: 469916 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7fdfac8dab20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fdfac8daaf6.
	[  +0.000001] RSP: 002b:00007ffd587a0060 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 02:14:09 up  2:33,  0 user,  load average: 0.98, 0.93, 2.05
	Linux newest-cni-383500 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:14:06 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:14:06 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7.
	Dec 17 02:14:06 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:14:06 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:14:07 newest-cni-383500 kubelet[19839]: E1217 02:14:07.128660   19839 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:14:07 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:14:07 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:14:07 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8.
	Dec 17 02:14:07 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:14:07 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:14:08 newest-cni-383500 kubelet[19866]: E1217 02:14:08.049049   19866 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:14:08 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:14:08 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:14:08 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9.
	Dec 17 02:14:08 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:14:08 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:14:08 newest-cni-383500 kubelet[19894]: E1217 02:14:08.803339   19894 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:14:08 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:14:08 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:14:09 newest-cni-383500 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10.
	Dec 17 02:14:09 newest-cni-383500 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:14:09 newest-cni-383500 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:14:09 newest-cni-383500 kubelet[19917]: E1217 02:14:09.556630   19917 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:14:09 newest-cni-383500 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:14:09 newest-cni-383500 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	

-- /stdout --
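
The gather-logs loop above simply repeats the same probes against the newest-cni-383500 node every few seconds until the 6m0s apiserver wait gives up. Assuming the kic node container is named after the profile (the hostnames in the journal output suggest it is), the probes can be replayed by hand; this is a diagnostic sketch, not output from the run:

    # minikube's own liveness check: is any apiserver process up?
    docker exec newest-cni-383500 sudo pgrep -xnf 'kube-apiserver.*minikube.*'
    # kube-apiserver serves /readyz; connection refused means no listener at all
    docker exec newest-cni-383500 curl -ksS https://localhost:8443/readyz
    # why kubelet never brings the static pods up
    docker exec newest-cni-383500 sudo journalctl -u kubelet -n 20 --no-pager

Both checks fail here for the same reason: kubelet exits before creating any static pods, so there is no apiserver to probe.
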
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-383500 -n newest-cni-383500
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p newest-cni-383500 -n newest-cni-383500: exit status 2 (569.9138ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "newest-cni-383500" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (13.30s)
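
The kubelet entries above contain the actual root cause for this profile: the v1.35.0-beta.0 kubelet refuses to validate its configuration on a cgroup v1 host, exits with status 1, and systemd keeps restarting it (counter at 10 by the end of the capture), so the apiserver static pod is never created and the pause has nothing to act on. A quick manual check of which cgroup hierarchy the host presents, sketched here as a diagnostic aid rather than anything the suite runs:

    # prints "cgroup2fs" on a cgroup v2 host, "tmpfs" on a legacy v1 host
    stat -fc %T /sys/fs/cgroup/

On WSL2 (this kernel is 5.15.153.1-microsoft-standard-WSL2), the hierarchy can typically be switched to v2 from %UserProfile%\.wslconfig; kernelCommandLine is a documented [wsl2] setting, though the exact flag below is the commonly used one and is not taken from this report:

    [wsl2]
    kernelCommandLine = cgroup_no_v1=all
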

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (215.34s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E1217 02:20:33.776404    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:20:38.689672    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:21:00.975396    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\default-k8s-diff-port-278200\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:21:05.638163    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:21:52.428528    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:22:33.957477    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:23:04.340148    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\bridge-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:23:07.215833    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 02:23:14.186733    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:23:25.495421    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
E1217 02:23:46.410142    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kubenet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://127.0.0.1:63565/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": EOF
helpers_test.go:338: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
start_stop_delete_test.go:285: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:285: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000
start_stop_delete_test.go:285: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000: exit status 2 (602.289ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:285: status error: exit status 2 (may be ok)
start_stop_delete_test.go:285: "no-preload-184000" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:286: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-184000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:289: (dbg) Non-zero exit: kubectl --context no-preload-184000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (0s)
start_stop_delete_test.go:291: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-184000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:295: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
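
All of the poll failures above share one shape: the helper lists pods matching the k8s-app=kubernetes-dashboard label selector through the profile's forwarded endpoint at 127.0.0.1:63565, and every request dies with EOF because the apiserver behind that port is stopped. The equivalent manual query, sketched with the same context and selector the helper uses:

    kubectl --context no-preload-184000 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard

While the profile's apiserver reports Stopped, this command fails in exactly the same way, so the 9m0s wait can never succeed.
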
helpers_test.go:223: -----------------------post-mortem--------------------------------
helpers_test.go:224: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: network settings <======
helpers_test.go:231: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:239: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: docker inspect <======
helpers_test.go:240: (dbg) Run:  docker inspect no-preload-184000
helpers_test.go:244: (dbg) docker inspect no-preload-184000:

-- stdout --
	[
	    {
	        "Id": "335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed",
	        "Created": "2025-12-17T01:54:01.802457191Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 454689,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-12-17T02:05:04.431751717Z",
	            "FinishedAt": "2025-12-17T02:05:01.217443908Z"
	        },
	        "Image": "sha256:2e44aac5cae5bb6b68b129ed5c85e80a5c1aac07706537d46ba12326f0e5c3cf",
	        "ResolvConfPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/hostname",
	        "HostsPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/hosts",
	        "LogPath": "/var/lib/docker/containers/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed/335cbfb80690dc2a0b5190ce927015dabd8a2a79432d4a692db43c5d7fc7a5ed-json.log",
	        "Name": "/no-preload-184000",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-184000:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "no-preload-184000",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "0"
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 3221225472,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180-init/diff:/var/lib/docker/overlay2/05b9322702cd2ca45555e0c2edc7fd8f7cbd757a3add6e8a8d520dafe491f420/diff",
	                "MergedDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/merged",
	                "UpperDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/diff",
	                "WorkDir": "/var/lib/docker/overlay2/4d612f92454c0006074b16248737d20a391d8b1a144d64b9394108363f9d6180/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-184000",
	                "Source": "/var/lib/docker/volumes/no-preload-184000/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-184000",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-184000",
	                "name.minikube.sigs.k8s.io": "no-preload-184000",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "cd75d9fe5c78c005b0249a246e3b62cf2a8873f5a0bf590eec1667b2401d46f3",
	            "SandboxKey": "/var/run/docker/netns/cd75d9fe5c78",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63566"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63567"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63568"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63569"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "63565"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-184000": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:5e:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6adb91d102dfa92bfa154127e93e39401be06a5d21df5043f3e85e012e93e321",
	                    "EndpointID": "2717bfe6e1d6a16c3b3b21a01d0c25052321fa1d05a920cee0a218e0ea604d53",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-184000",
	                        "335cbfb80690"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
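
The docker inspect output above also explains where the 127.0.0.1:63565 endpoint in the earlier EOF errors comes from: it is the host port Docker bound to the node container's 8443/tcp apiserver port. That mapping can be extracted directly with Docker's standard Go-template syntax (a convenience sketch, not something the suite runs):

    docker inspect no-preload-184000 --format '{{ (index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort }}'

With the container Running but kubelet crash-looping, the port is bound yet nothing inside answers on it, which matches the Running host status and the Stopped apiserver status below.
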
helpers_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-184000 -n no-preload-184000
helpers_test.go:248: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-184000 -n no-preload-184000: exit status 2 (586.564ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:248: status error: exit status 2 (may be ok)
helpers_test.go:253: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:254: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:256: (dbg) Run:  out/minikube-windows-amd64.exe -p no-preload-184000 logs -n 25
helpers_test.go:256: (dbg) Done: out/minikube-windows-amd64.exe -p no-preload-184000 logs -n 25: (1.6830005s)
helpers_test.go:261: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                            ARGS                                                                                                            │           PROFILE            │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ delete  │ -p old-k8s-version-044000                                                                                                                                                                                                  │ old-k8s-version-044000       │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │ 17 Dec 25 01:56 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:56 UTC │                     │
	│ image   │ embed-certs-653800 image list --format=json                                                                                                                                                                                │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p embed-certs-653800 --alsologtostderr -v=1                                                                                                                                                                               │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p embed-certs-653800                                                                                                                                                                                                      │ embed-certs-653800           │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ image   │ default-k8s-diff-port-278200 image list --format=json                                                                                                                                                                      │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ pause   │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ unpause │ -p default-k8s-diff-port-278200 --alsologtostderr -v=1                                                                                                                                                                     │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ delete  │ -p default-k8s-diff-port-278200                                                                                                                                                                                            │ default-k8s-diff-port-278200 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 01:57 UTC │ 17 Dec 25 01:57 UTC │
	│ addons  │ enable metrics-server -p no-preload-184000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:03 UTC │                     │
	│ stop    │ -p no-preload-184000 --alsologtostderr -v=3                                                                                                                                                                                │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ addons  │ enable dashboard -p no-preload-184000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │ 17 Dec 25 02:05 UTC │
	│ start   │ -p no-preload-184000 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker --kubernetes-version=v1.35.0-beta.0                                                                                       │ no-preload-184000            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-383500 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                    │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:05 UTC │                     │
	│ stop    │ -p newest-cni-383500 --alsologtostderr -v=3                                                                                                                                                                                │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │ 17 Dec 25 02:07 UTC │
	│ addons  │ enable dashboard -p newest-cni-383500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                               │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │ 17 Dec 25 02:07 UTC │
	│ start   │ -p newest-cni-383500 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker --kubernetes-version=v1.35.0-beta.0 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:07 UTC │                     │
	│ image   │ newest-cni-383500 image list --format=json                                                                                                                                                                                 │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:13 UTC │ 17 Dec 25 02:13 UTC │
	│ pause   │ -p newest-cni-383500 --alsologtostderr -v=1                                                                                                                                                                                │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:13 UTC │ 17 Dec 25 02:13 UTC │
	│ unpause │ -p newest-cni-383500 --alsologtostderr -v=1                                                                                                                                                                                │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:14 UTC │ 17 Dec 25 02:14 UTC │
	│ delete  │ -p newest-cni-383500                                                                                                                                                                                                       │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:14 UTC │ 17 Dec 25 02:14 UTC │
	│ delete  │ -p newest-cni-383500                                                                                                                                                                                                       │ newest-cni-383500            │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 02:14 UTC │ 17 Dec 25 02:14 UTC │
	└─────────┴────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 02:07:37
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
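	Given that header format, warning and error entries can be pulled out of a saved copy of this log with a plain grep; a minimal sketch (minikube.log is a hypothetical file name):
	
		grep -E '^[WE][0-9]{4} [0-9:.]+' minikube.log
	
	The W1217 and I1217 prefixes below follow exactly this [IWEF]mmdd pattern.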
	I1217 02:07:37.336708    6296 out.go:360] Setting OutFile to fd 968 ...
	I1217 02:07:37.380113    6296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:07:37.380113    6296 out.go:374] Setting ErrFile to fd 1700...
	I1217 02:07:37.380113    6296 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 02:07:37.394455    6296 out.go:368] Setting JSON to false
	I1217 02:07:37.396490    6296 start.go:133] hostinfo: {"hostname":"minikube4","uptime":8845,"bootTime":1765928411,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 02:07:37.397485    6296 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 02:07:37.401853    6296 out.go:179] * [newest-cni-383500] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 02:07:37.405009    6296 notify.go:221] Checking for updates...
	I1217 02:07:37.407761    6296 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:37.412054    6296 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 02:07:37.415031    6296 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 02:07:37.416942    6296 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 02:07:37.418887    6296 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1217 02:07:37.439676    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:07:37.422499    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:37.422499    6296 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 02:07:37.541250    6296 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 02:07:37.544536    6296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:07:37.790862    6296 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:07:37.763465755 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:07:37.793941    6296 out.go:179] * Using the docker driver based on existing profile
	I1217 02:07:37.795944    6296 start.go:309] selected driver: docker
	I1217 02:07:37.795944    6296 start.go:927] validating driver "docker" against &{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:37.796941    6296 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 02:07:37.881125    6296 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 02:07:38.106129    6296 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:67 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 02:07:38.085504737 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 02:07:38.106129    6296 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1217 02:07:38.106129    6296 cni.go:84] Creating CNI manager for ""
	I1217 02:07:38.106661    6296 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:07:38.106789    6296 start.go:353] cluster config:
	{Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:38.110370    6296 out.go:179] * Starting "newest-cni-383500" primary control-plane node in "newest-cni-383500" cluster
	I1217 02:07:38.113499    6296 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 02:07:38.115628    6296 out.go:179] * Pulling base image v0.0.48-1765661130-22141 ...
	I1217 02:07:38.118799    6296 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:07:38.118867    6296 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 02:07:38.118972    6296 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 02:07:38.119036    6296 cache.go:65] Caching tarball of preloaded images
	I1217 02:07:38.119094    6296 preload.go:238] Found C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I1217 02:07:38.119094    6296 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 02:07:38.119094    6296 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 02:07:38.197259    6296 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon, skipping pull
	I1217 02:07:38.197259    6296 cache.go:158] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in daemon, skipping load
	I1217 02:07:38.197259    6296 cache.go:243] Successfully downloaded all kic artifacts
	I1217 02:07:38.197259    6296 start.go:360] acquireMachinesLock for newest-cni-383500: {Name:mk34ae41921c4a11acc2a38ede8796b825a35934 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1217 02:07:38.197259    6296 start.go:364] duration metric: took 0s to acquireMachinesLock for "newest-cni-383500"
	I1217 02:07:38.197259    6296 start.go:96] Skipping create...Using existing machine configuration
	I1217 02:07:38.197259    6296 fix.go:54] fixHost starting: 
	I1217 02:07:38.204499    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:38.259240    6296 fix.go:112] recreateIfNeeded on newest-cni-383500: state=Stopped err=<nil>
	W1217 02:07:38.259240    6296 fix.go:138] unexpected machine state, will restart: <nil>
	I1217 02:07:38.262335    6296 out.go:252] * Restarting existing docker container for "newest-cni-383500" ...
	I1217 02:07:38.265716    6296 cli_runner.go:164] Run: docker start newest-cni-383500
	I1217 02:07:38.804123    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:38.863188    6296 kic.go:430] container "newest-cni-383500" state is running.
	I1217 02:07:38.868900    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:38.924169    6296 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\config.json ...
	I1217 02:07:38.926083    6296 machine.go:94] provisionDockerMachine start ...
	I1217 02:07:38.928987    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:38.984001    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:38.984993    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:38.984993    6296 main.go:143] libmachine: About to run SSH command:
	hostname
	I1217 02:07:38.986003    6296 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: EOF
	I1217 02:07:42.161557    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 02:07:42.161646    6296 ubuntu.go:182] provisioning hostname "newest-cni-383500"
	I1217 02:07:42.166827    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.231443    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:42.231698    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:42.231698    6296 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-383500 && echo "newest-cni-383500" | sudo tee /etc/hostname
	I1217 02:07:42.423907    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-383500
	
	I1217 02:07:42.432743    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.491085    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:42.491085    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:42.491085    6296 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-383500' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-383500/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-383500' | sudo tee -a /etc/hosts; 
				fi
			fi
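	The guard above only rewrites the 127.0.1.1 entry when /etc/hosts does not already name the machine, so provisioning can be re-run safely. A spot check, assuming the profile is still up (hypothetical, not part of the test run):
	
		minikube -p newest-cni-383500 ssh -- grep newest-cni-383500 /etc/hosts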
	I1217 02:07:42.667009    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 02:07:42.667009    6296 ubuntu.go:188] set auth options {CertDir:C:\Users\jenkins.minikube4\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube4\minikube-integration\.minikube}
	I1217 02:07:42.667009    6296 ubuntu.go:190] setting up certificates
	I1217 02:07:42.667009    6296 provision.go:84] configureAuth start
	I1217 02:07:42.671320    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:42.724474    6296 provision.go:143] copyHostCerts
	I1217 02:07:42.725072    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem, removing ...
	I1217 02:07:42.725072    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.pem
	I1217 02:07:42.725072    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/ca.pem (1078 bytes)
	I1217 02:07:42.726229    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem, removing ...
	I1217 02:07:42.726229    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cert.pem
	I1217 02:07:42.726812    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/cert.pem (1123 bytes)
	I1217 02:07:42.727386    6296 exec_runner.go:144] found C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem, removing ...
	I1217 02:07:42.727386    6296 exec_runner.go:203] rm: C:\Users\jenkins.minikube4\minikube-integration\.minikube\key.pem
	I1217 02:07:42.727386    6296 exec_runner.go:151] cp: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube4\minikube-integration\.minikube/key.pem (1675 bytes)
	I1217 02:07:42.728644    6296 provision.go:117] generating server cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.newest-cni-383500 san=[127.0.0.1 192.168.76.2 localhost minikube newest-cni-383500]
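	The server certificate generated here has to carry every name and address a client may dial, which is why the san list above spans 127.0.0.1, the container IP and the hostname. The SANs actually embedded can be confirmed with openssl; a minimal sketch, assuming openssl is available and using the server.pem path from the log:
	
		openssl x509 -in server.pem -noout -text | grep -A1 'Subject Alternative Name'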
	I1217 02:07:42.882778    6296 provision.go:177] copyRemoteCerts
	I1217 02:07:42.886667    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1217 02:07:42.889412    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:42.946034    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:43.080244    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1217 02:07:43.111350    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1220 bytes)
	I1217 02:07:43.145228    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1217 02:07:43.176328    6296 provision.go:87] duration metric: took 509.312ms to configureAuth
	I1217 02:07:43.176328    6296 ubuntu.go:206] setting minikube options for container-runtime
	I1217 02:07:43.176328    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:43.180705    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.236378    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.237514    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.237514    6296 main.go:143] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I1217 02:07:43.404492    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: overlay
	
	I1217 02:07:43.404492    6296 ubuntu.go:71] root file system type: overlay
	I1217 02:07:43.405056    6296 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I1217 02:07:43.408624    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.465282    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.465408    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.465408    6296 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 \
		-H fd:// --containerd=/run/containerd/containerd.sock \
		-H unix:///var/run/docker.sock \
		--default-ulimit=nofile=1048576:1048576 \
		--tlsverify \
		--tlscacert /etc/docker/ca.pem \
		--tlscert /etc/docker/server.pem \
		--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I1217 02:07:43.658319    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network-online.target nss-lookup.target docker.socket firewalld.service containerd.service time-set.target
	Wants=network-online.target containerd.service
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=always
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 	-H fd:// --containerd=/run/containerd/containerd.sock 	-H unix:///var/run/docker.sock 	--default-ulimit=nofile=1048576:1048576 	--tlsverify 	--tlscacert /etc/docker/ca.pem 	--tlscert /etc/docker/server.pem 	--tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	OOMScoreAdjust=-500
	
	[Install]
	WantedBy=multi-user.target
	
	I1217 02:07:43.662395    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.719191    6296 main.go:143] libmachine: Using SSH client type: native
	I1217 02:07:43.719552    6296 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x7ff6b94ffd00] 0x7ff6b9502860 <nil>  [] 0s} 127.0.0.1 63782 <nil> <nil>}
	I1217 02:07:43.719552    6296 main.go:143] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
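	The one-liner above keeps the unit swap idempotent: diff -u short-circuits when the rendered file already matches the installed one, so dockerd is only reloaded and restarted on a real change. Whether the override took effect can be read back from inside the node; a minimal sketch:
	
		systemctl show docker.service -p ExecStart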
	I1217 02:07:43.890999    6296 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1217 02:07:43.890999    6296 machine.go:97] duration metric: took 4.9648419s to provisionDockerMachine
	I1217 02:07:43.890999    6296 start.go:293] postStartSetup for "newest-cni-383500" (driver="docker")
	I1217 02:07:43.890999    6296 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1217 02:07:43.895385    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1217 02:07:43.899109    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:43.952181    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.085157    6296 ssh_runner.go:195] Run: cat /etc/os-release
	I1217 02:07:44.092998    6296 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1217 02:07:44.093086    6296 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1217 02:07:44.093086    6296 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\addons for local assets ...
	I1217 02:07:44.093465    6296 filesync.go:126] Scanning C:\Users\jenkins.minikube4\minikube-integration\.minikube\files for local assets ...
	I1217 02:07:44.094379    6296 filesync.go:149] local asset: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem -> 41682.pem in /etc/ssl/certs
	I1217 02:07:44.099969    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1217 02:07:44.115031    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /etc/ssl/certs/41682.pem (1708 bytes)
	I1217 02:07:44.146317    6296 start.go:296] duration metric: took 255.2637ms for postStartSetup
	I1217 02:07:44.150381    6296 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 02:07:44.153098    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.206142    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.337637    6296 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1217 02:07:44.346313    6296 fix.go:56] duration metric: took 6.1489614s for fixHost
	I1217 02:07:44.346313    6296 start.go:83] releasing machines lock for "newest-cni-383500", held for 6.1489614s
	I1217 02:07:44.350643    6296 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-383500
	I1217 02:07:44.409164    6296 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I1217 02:07:44.413957    6296 ssh_runner.go:195] Run: cat /version.json
	I1217 02:07:44.414540    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.416694    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:44.466739    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:44.469418    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	W1217 02:07:44.591848    6296 start.go:869] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
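	Exit status 127 here is plain "command not found": the probe reuses the host-side binary name, and curl.exe does not exist inside the Linux node, so the registry check fails before any traffic is sent. The equivalent probe with the Linux binary name would be (hypothetical, assuming curl is present in the node image):
	
		minikube -p newest-cni-383500 ssh -- curl -sS -m 2 https://registry.k8s.io/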
	I1217 02:07:44.598090    6296 ssh_runner.go:195] Run: systemctl --version
	I1217 02:07:44.614283    6296 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1217 02:07:44.624324    6296 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1217 02:07:44.628955    6296 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1217 02:07:44.642200    6296 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1217 02:07:44.642243    6296 start.go:496] detecting cgroup driver to use...
	I1217 02:07:44.642333    6296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:07:44.642453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:07:44.671216    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10.1"|' /etc/containerd/config.toml"
	I1217 02:07:44.689408    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I1217 02:07:44.702919    6296 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I1217 02:07:44.707856    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I1217 02:07:44.727869    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:07:44.747180    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	W1217 02:07:44.751020    6296 out.go:285] ! Failing to connect to https://registry.k8s.io/ from inside the minikube container
	W1217 02:07:44.751020    6296 out.go:285] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I1217 02:07:44.766866    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I1217 02:07:44.786853    6296 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1217 02:07:44.806986    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I1217 02:07:44.828346    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I1217 02:07:44.848400    6296 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I1217 02:07:44.870349    6296 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1217 02:07:44.887217    6296 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1217 02:07:44.905216    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:45.047629    6296 ssh_runner.go:195] Run: sudo systemctl restart containerd
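	Each sed above pushes /etc/containerd/config.toml towards the same goal: the runtime must agree with the "cgroupfs" driver detected on the host side, since kubelet fails on a cgroup driver mismatch. After the restart, the effective driver can be confirmed with the same template the runner uses further down, plus a direct look at the config; a minimal sketch from inside the node:
	
		docker info --format '{{.CgroupDriver}}'
		grep SystemdCgroup /etc/containerd/config.toml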
	I1217 02:07:45.203749    6296 start.go:496] detecting cgroup driver to use...
	I1217 02:07:45.203842    6296 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1217 02:07:45.209421    6296 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I1217 02:07:45.236823    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:07:45.259331    6296 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1217 02:07:45.337368    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1217 02:07:45.361492    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I1217 02:07:45.381383    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I1217 02:07:45.409600    6296 ssh_runner.go:195] Run: which cri-dockerd
	I1217 02:07:45.421762    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I1217 02:07:45.435668    6296 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (192 bytes)
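	The /etc/crictl.yaml written just above is what lets the bare crictl calls further down find the cri-dockerd socket without repeating the endpoint every time; the fully explicit form would be (equivalent by assumption, shown only for illustration):
	
		sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version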
	I1217 02:07:45.461708    6296 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I1217 02:07:45.616228    6296 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I1217 02:07:45.751670    6296 docker.go:575] configuring docker to use "cgroupfs" as cgroup driver...
	I1217 02:07:45.751670    6296 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I1217 02:07:45.778504    6296 ssh_runner.go:195] Run: sudo systemctl reset-failed docker
	I1217 02:07:45.800985    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:45.956342    6296 ssh_runner.go:195] Run: sudo systemctl restart docker
	I1217 02:07:46.816501    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1217 02:07:46.840410    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I1217 02:07:46.865817    6296 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I1217 02:07:46.890943    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:07:46.914319    6296 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I1217 02:07:47.058242    6296 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I1217 02:07:47.214522    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:47.355565    6296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	W1217 02:07:47.472644    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:07:47.382801    6296 ssh_runner.go:195] Run: sudo systemctl reset-failed cri-docker.service
	I1217 02:07:47.407455    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:47.558893    6296 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I1217 02:07:47.666138    6296 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I1217 02:07:47.686246    6296 start.go:543] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I1217 02:07:47.690618    6296 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I1217 02:07:47.697013    6296 start.go:564] Will wait 60s for crictl version
	I1217 02:07:47.702316    6296 ssh_runner.go:195] Run: which crictl
	I1217 02:07:47.713878    6296 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1217 02:07:47.755301    6296 start.go:580] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  29.1.3
	RuntimeApiVersion:  v1
	I1217 02:07:47.758809    6296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:07:47.803772    6296 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I1217 02:07:47.845573    6296 out.go:252] * Preparing Kubernetes v1.35.0-beta.0 on Docker 29.1.3 ...
	I1217 02:07:47.849368    6296 cli_runner.go:164] Run: docker exec -t newest-cni-383500 dig +short host.docker.internal
	I1217 02:07:47.978778    6296 network.go:96] got host ip for mount in container by digging dns: 192.168.65.254
	I1217 02:07:47.983162    6296 ssh_runner.go:195] Run: grep 192.168.65.254	host.minikube.internal$ /etc/hosts
	I1217 02:07:47.993198    6296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.65.254	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
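	The dig against host.docker.internal recovers Docker Desktop's host gateway (192.168.65.254 here), and the bash snippet pins that address to host.minikube.internal inside the node. A spot check from the host side, assuming the container is still running:
	
		docker exec -t newest-cni-383500 getent hosts host.minikube.internal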
	I1217 02:07:48.011887    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:48.072090    6296 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1217 02:07:48.073820    6296 kubeadm.go:884] updating cluster {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1217 02:07:48.073820    6296 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 02:07:48.077080    6296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 02:07:48.110342    6296 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 02:07:48.110411    6296 docker.go:621] Images already preloaded, skipping extraction
	I1217 02:07:48.113821    6296 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I1217 02:07:48.144461    6296 docker.go:691] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.35.0-beta.0
	registry.k8s.io/kube-scheduler:v1.35.0-beta.0
	registry.k8s.io/kube-proxy:v1.35.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
	registry.k8s.io/coredns/coredns:v1.13.1
	registry.k8s.io/etcd:3.6.5-0
	registry.k8s.io/pause:3.10.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I1217 02:07:48.144530    6296 cache_images.go:86] Images are preloaded, skipping loading
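
Both "docker images" listings match the preload manifest for Kubernetes v1.35.0-beta.0 (apiserver, scheduler, proxy, controller-manager, coredns v1.13.1, etcd 3.6.5-0, pause 3.10.1, storage-provisioner v5), so minikube skips both tarball extraction and image loading. The same state can be spot-checked from the host; the profile name below is the one used in this run:

	# list the images already present inside the node container
	docker exec -t newest-cni-383500 docker images --format '{{.Repository}}:{{.Tag}}'
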
	I1217 02:07:48.144530    6296 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.35.0-beta.0 docker true true} ...
	I1217 02:07:48.144779    6296 kubeadm.go:947] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.35.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=newest-cni-383500 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
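
The kubelet text above is rendered as a systemd drop-in: the empty ExecStart= line clears the packaged command before the second ExecStart= re-launches kubelet with the node-specific flags (hostname-override, node-ip, the bootstrap kubeconfig). The log below writes it to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes). On the node, the merged result can be inspected with:

	# show the kubelet unit together with all of its drop-ins
	systemctl cat kubelet
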
	I1217 02:07:48.149102    6296 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I1217 02:07:48.225894    6296 cni.go:84] Creating CNI manager for ""
	I1217 02:07:48.225894    6296 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 02:07:48.225894    6296 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1217 02:07:48.225894    6296 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.35.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-383500 NodeName:newest-cni-383500 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1217 02:07:48.226504    6296 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "newest-cni-383500"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.35.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
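
The kubeadm config printed above is one multi-document YAML (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); the log below copies it to /var/tmp/minikube/kubeadm.yaml.new (2223 bytes). When a start like this needs debugging, the file can be linted with the same kubeadm binary minikube staged; "kubeadm config validate" exists in current kubeadm releases, though treat this invocation as an illustrative sketch rather than something the test itself runs:

	# lint the generated multi-document config with the staged kubeadm
	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new
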
	
	I1217 02:07:48.230913    6296 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.35.0-beta.0
	I1217 02:07:48.243749    6296 binaries.go:51] Found k8s binaries, skipping transfer
	I1217 02:07:48.248634    6296 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1217 02:07:48.262382    6296 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (323 bytes)
	I1217 02:07:48.284386    6296 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I1217 02:07:48.306623    6296 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2223 bytes)
	I1217 02:07:48.332101    6296 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1217 02:07:48.341865    6296 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1217 02:07:48.360919    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:48.498620    6296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:07:48.520308    6296 certs.go:69] Setting up C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500 for IP: 192.168.76.2
	I1217 02:07:48.520346    6296 certs.go:195] generating shared ca certs ...
	I1217 02:07:48.520390    6296 certs.go:227] acquiring lock for ca certs: {Name:mk92285f7546e1a5b3c3b23dab6135aa5a99cd14 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:48.520420    6296 certs.go:236] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key
	I1217 02:07:48.521152    6296 certs.go:236] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key
	I1217 02:07:48.521359    6296 certs.go:257] generating profile certs ...
	I1217 02:07:48.521695    6296 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\client.key
	I1217 02:07:48.521695    6296 certs.go:360] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key.c9c9b4b8
	I1217 02:07:48.522472    6296 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key
	I1217 02:07:48.523217    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem (1338 bytes)
	W1217 02:07:48.523515    6296 certs.go:480] ignoring C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168_empty.pem, impossibly tiny 0 bytes
	I1217 02:07:48.523598    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca-key.pem (1675 bytes)
	I1217 02:07:48.523888    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I1217 02:07:48.524140    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I1217 02:07:48.524399    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\key.pem (1675 bytes)
	I1217 02:07:48.525045    6296 certs.go:484] found cert: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem (1708 bytes)
	I1217 02:07:48.526649    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1217 02:07:48.558725    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1217 02:07:48.590333    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1217 02:07:48.621493    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1217 02:07:48.650907    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1217 02:07:48.678948    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1217 02:07:48.708871    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1217 02:07:48.738822    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\newest-cni-383500\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1217 02:07:48.769873    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\certs\4168.pem --> /usr/share/ca-certificates/4168.pem (1338 bytes)
	I1217 02:07:48.801411    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\ssl\certs\41682.pem --> /usr/share/ca-certificates/41682.pem (1708 bytes)
	I1217 02:07:48.828208    6296 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1217 02:07:48.859551    6296 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1217 02:07:48.888197    6296 ssh_runner.go:195] Run: openssl version
	I1217 02:07:48.903194    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.920018    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/41682.pem /etc/ssl/certs/41682.pem
	I1217 02:07:48.936734    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.943690    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Dec 17 00:23 /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.948571    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/41682.pem
	I1217 02:07:48.997651    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/3ec20f2e.0
	I1217 02:07:49.015514    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.035513    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem
	I1217 02:07:49.056511    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.065394    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 17 00:07 /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.070742    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1217 02:07:49.117805    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/b5213941.0
	I1217 02:07:49.140198    6296 ssh_runner.go:195] Run: sudo test -s /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.156992    6296 ssh_runner.go:195] Run: sudo ln -fs /usr/share/ca-certificates/4168.pem /etc/ssl/certs/4168.pem
	I1217 02:07:49.175485    6296 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.184194    6296 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Dec 17 00:23 /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.187479    6296 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/4168.pem
	I1217 02:07:49.237543    6296 ssh_runner.go:195] Run: sudo test -L /etc/ssl/certs/51391683.0
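
The openssl x509 -hash -noout calls in this stretch compute the OpenSSL subject-name hash, and each ln -fs / test -L pair confirms the certificate is reachable under /etc/ssl/certs/<hash>.0 (b5213941.0 for minikubeCA, 3ec20f2e.0 and 51391683.0 for the test certs), which is how OpenSSL-based clients locate trusted CAs. Reproducing one check by hand:

	# the symlink name is the subject hash with a ".0" suffix
	h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	ls -l "/etc/ssl/certs/${h}.0"
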
	I1217 02:07:49.254809    6296 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1217 02:07:49.269508    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1217 02:07:49.317073    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1217 02:07:49.365797    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1217 02:07:49.413853    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1217 02:07:49.462871    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1217 02:07:49.515512    6296 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
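
The six -checkend 86400 probes ask whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a non-zero exit would make minikube regenerate the cert. All pass here, so the existing certs are reused. For example:

	# exit status 0 means the cert is valid for at least another 24h
	openssl x509 -noout -checkend 86400 \
	  -in /var/lib/minikube/certs/apiserver-kubelet-client.crt && echo still-valid
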
	I1217 02:07:49.558666    6296 kubeadm.go:401] StartCluster: {Name:newest-cni-383500 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:newest-cni-383500 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 02:07:49.563317    6296 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I1217 02:07:49.602899    6296 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1217 02:07:49.616365    6296 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1217 02:07:49.616365    6296 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1217 02:07:49.622022    6296 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1217 02:07:49.637152    6296 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1217 02:07:49.641090    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.693295    6296 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-383500" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:49.693843    6296 kubeconfig.go:62] C:\Users\jenkins.minikube4\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-383500" cluster setting kubeconfig missing "newest-cni-383500" context setting]
	I1217 02:07:49.694722    6296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
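
The kubeconfig repair above re-adds the missing "newest-cni-383500" cluster and context entries to the shared Jenkins kubeconfig under a write lock. After a successful start, the merged entry could be confirmed with kubectl (path as used by this job):

	kubectl config get-contexts newest-cni-383500 --kubeconfig "C:\Users\jenkins.minikube4\minikube-integration\kubeconfig"
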
	I1217 02:07:49.716755    6296 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1217 02:07:49.731850    6296 kubeadm.go:635] The running cluster does not require reconfiguration: 127.0.0.1
	I1217 02:07:49.731850    6296 kubeadm.go:602] duration metric: took 115.4836ms to restartPrimaryControlPlane
	I1217 02:07:49.731850    6296 kubeadm.go:403] duration metric: took 173.1816ms to StartCluster
	I1217 02:07:49.731850    6296 settings.go:142] acquiring lock: {Name:mk5d8710830d010adb6db61f855b0ef766a8622c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:49.731850    6296 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 02:07:49.732839    6296 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\kubeconfig: {Name:mk97c09b788e5010ffd4c9dd9525f9245d5edd25 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 02:07:49.734654    6296 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I1217 02:07:49.734654    6296 addons.go:527] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1217 02:07:49.734654    6296 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:70] Setting dashboard=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 config.go:182] Loaded profile config "newest-cni-383500": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 02:07:49.734654    6296 addons.go:70] Setting default-storageclass=true in profile "newest-cni-383500"
	I1217 02:07:49.734654    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.734654    6296 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-383500"
	I1217 02:07:49.734654    6296 addons.go:239] Setting addon dashboard=true in "newest-cni-383500"
	W1217 02:07:49.734654    6296 addons.go:248] addon dashboard should already be in state true
	I1217 02:07:49.735179    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.739634    6296 out.go:179] * Verifying Kubernetes components...
	I1217 02:07:49.743427    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.744378    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.744378    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.745812    6296 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1217 02:07:49.809135    6296 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1217 02:07:49.809532    6296 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1217 02:07:49.812989    6296 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:49.812989    6296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1217 02:07:49.816981    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.817010    6296 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1217 02:07:49.818467    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1217 02:07:49.818467    6296 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1217 02:07:49.823270    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.824987    6296 addons.go:239] Setting addon default-storageclass=true in "newest-cni-383500"
	I1217 02:07:49.825100    6296 host.go:66] Checking if "newest-cni-383500" exists ...
	I1217 02:07:49.836645    6296 cli_runner.go:164] Run: docker container inspect newest-cni-383500 --format={{.State.Status}}
	I1217 02:07:49.881995    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.881995    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.889991    6296 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:49.889991    6296 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1217 02:07:49.892991    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:49.925992    6296 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1217 02:07:49.943010    6296 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:63782 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\newest-cni-383500\id_rsa Username:docker}
	I1217 02:07:49.950996    6296 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" newest-cni-383500
	I1217 02:07:50.005058    6296 api_server.go:52] waiting for apiserver process to appear ...
	I1217 02:07:50.009064    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:50.011068    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.014077    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1217 02:07:50.014077    6296 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1217 02:07:50.034057    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1217 02:07:50.034057    6296 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1217 02:07:50.102553    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1217 02:07:50.102611    6296 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1217 02:07:50.106900    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:50.124027    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1217 02:07:50.124027    6296 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1217 02:07:50.189590    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1217 02:07:50.189677    6296 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	W1217 02:07:50.190082    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.190082    6296 retry.go:31] will retry after 343.200838ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
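
Every failed apply in this stretch has the same root cause: kubectl's client-side validation fetches the OpenAPI schema from https://localhost:8443 inside the node, and the connection is refused because the restarted kube-apiserver is not listening yet; retry.go re-runs each apply after short, jittered delays until it is. A simple way to wait for the same condition by hand (readyz is served to unauthenticated clients by default; -k skips verification of the self-signed serving cert):

	# poll the apiserver readiness endpoint from inside the node
	until curl -fsk https://localhost:8443/readyz >/dev/null; do sleep 1; done
	echo apiserver ready
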
	I1217 02:07:50.212250    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1217 02:07:50.212311    6296 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1217 02:07:50.231619    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1217 02:07:50.231619    6296 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	W1217 02:07:50.241078    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.241078    6296 retry.go:31] will retry after 338.608253ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.254747    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1217 02:07:50.254794    6296 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1217 02:07:50.277655    6296 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:50.277655    6296 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1217 02:07:50.303268    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:50.381205    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.381205    6296 retry.go:31] will retry after 204.689537ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.510673    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:50.538343    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.585518    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:50.590250    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:50.625635    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.625793    6296 retry.go:31] will retry after 198.686568ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.703247    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.703247    6296 retry.go:31] will retry after 199.792365ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.713669    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.714671    6296 retry.go:31] will retry after 441.125735ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.831068    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:07:50.910787    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:50.921027    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.921027    6296 retry.go:31] will retry after 637.088373ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:07:50.993148    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:50.993148    6296 retry.go:31] will retry after 819.774881ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.009768    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.161082    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:51.282295    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.282369    6296 retry.go:31] will retry after 677.278565ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:07:51.510844    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:51.563702    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:51.642986    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.642986    6296 retry.go:31] will retry after 1.231128198s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:07:51.817677    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:51.902470    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:51.902470    6296 retry.go:31] will retry after 1.160161898s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:07:51.964724    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:52.009393    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:52.053520    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.053520    6296 retry.go:31] will retry after 497.775491ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:07:52.510530    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:52.556698    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:52.641425    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.641425    6296 retry.go:31] will retry after 893.419079ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:07:52.880811    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:52.961643    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:52.961643    6296 retry.go:31] will retry after 1.354718896s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:07:53.009905    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:53.068292    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:53.159843    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.159885    6296 retry.go:31] will retry after 830.811591ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:07:53.510300    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:53.539679    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:53.634195    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:53.634195    6296 retry.go:31] will retry after 1.875797166s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:07:53.997012    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:54.010116    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:54.085004    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.085004    6296 retry.go:31] will retry after 2.403477641s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:07:54.321510    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:54.401677    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:54.401677    6296 retry.go:31] will retry after 2.197762331s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:07:54.509750    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:55.011577    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:55.509949    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
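Interleaved with the applies, the process polls roughly every 500ms for a running kube-apiserver with sudo pgrep -xnf kube-apiserver.*minikube.*, which exits 0 once a matching process exists. A sketch of such a poll; the pgrep pattern is copied from the log, while the timeout and the polling interval handling are illustrative:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process matching the
// minikube profile exists; pgrep exits 0 on a match, non-zero otherwise.
// Note this only proves the process is running, not that it is serving yet,
// which is why the applies above keep failing even while the poll runs.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	if err := waitForAPIServer(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}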
	I1217 02:07:55.514301    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:07:55.590724    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:55.590724    6296 retry.go:31] will retry after 3.771224323s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	I1217 02:07:56.010995    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:56.493760    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:07:56.509755    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:56.580067    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.580067    6296 retry.go:31] will retry after 2.862008002s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	I1217 02:07:56.606008    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:07:56.692846    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:56.693375    6296 retry.go:31] will retry after 3.419223727s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	I1217 02:07:57.009866    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:57.510945    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:07:57.510327    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:58.010333    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:58.511391    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:07:59.013796    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
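The lone W line from PID 6768 above (node_ready.go:55) is output from the parallel no-preload-184000 test interleaved into this log: it asks that cluster's apiserver at 127.0.0.1:63565 for the node's "Ready" condition and gets EOF because that apiserver is not serving either. A rough sketch of such a readiness check against the raw REST path, assuming anonymous access and skipping TLS verification for brevity (a real client would authenticate via the kubeconfig):

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"net/http"
)

// nodeStatus mirrors just the fields of a v1 Node needed to read the
// "Ready" condition from GET /api/v1/nodes/<name>.
type nodeStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

func nodeReady(apiServer, name string) (bool, error) {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // diagnostic only
	}}
	resp, err := client.Get(apiServer + "/api/v1/nodes/" + name)
	if err != nil {
		return false, err // e.g. EOF while the apiserver is down, as in the log
	}
	defer resp.Body.Close()
	var n nodeStatus
	if err := json.NewDecoder(resp.Body).Decode(&n); err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	ready, err := nodeReady("https://127.0.0.1:63565", "no-preload-184000")
	fmt.Println(ready, err)
}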
	I1217 02:07:59.367655    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:07:59.447582    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:07:59.457416    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.457416    6296 retry.go:31] will retry after 6.254269418s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.510215    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:07:59.536524    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:07:59.536524    6296 retry.go:31] will retry after 4.240139996s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.010517    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:00.118263    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:00.197472    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.197472    6296 retry.go:31] will retry after 5.486941273s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:00.511349    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:01.012031    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:01.510877    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:02.011372    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:02.510995    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.011372    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:03.511479    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
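The pgrep lines repeating on a roughly 500ms cadence are minikube polling (over SSH, via ssh_runner) for a running kube-apiserver process; none of these polls succeeds during this window, which is why the addon applies keep failing. A sketch of that kind of process poll, assuming local execution in place of minikube's ssh_runner (illustrative only):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` every interval until a match
// appears or the context expires. pgrep exits nonzero when nothing matches,
// which surfaces here as a non-nil error from Output().
func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		if out, err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Output(); err == nil {
			fmt.Printf("found pid(s): %s", out)
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()
	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
		fmt.Println(err)
	}
}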
	I1217 02:08:03.781390    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:03.867561    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:03.867561    6296 retry.go:31] will retry after 5.255488401s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:04.011296    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:04.510695    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.011055    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.510174    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:05.690069    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:05.718147    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:05.792389    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:05.792389    6296 retry.go:31] will retry after 3.294946391s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:05.802187    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:05.802187    6296 retry.go:31] will retry after 6.599881974s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:06.010721    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:06.509941    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:07.010092    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:07.543861    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
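Interleaved with process 6296 above, a second test process (pid 6768, the no-preload group from TestStartStop) is polling its own cluster and getting EOF while reading the node's Ready condition; the two streams write to the same log, so nearby timestamps can appear slightly out of order. A hedged client-go sketch of that kind of Ready check, assuming a kubeconfig at the default location (not the test's actual node_ready.go):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
// Transport failures (such as the EOF in the log, typical while the
// apiserver restarts) surface as errors and are left to the caller to retry.
func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, fmt.Errorf("getting node %q (will retry): %w", name, err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := nodeReady(context.Background(), cs, "no-preload-184000")
	fmt.Println("ready:", ready, "err:", err)
}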
	I1217 02:08:07.511303    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:08.011059    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:08.511015    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.009909    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:09.092821    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:09.127423    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:09.180638    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.180716    6296 retry.go:31] will retry after 13.056189647s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:09.211988    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.212069    6296 retry.go:31] will retry after 13.872512266s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:09.510829    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:10.010907    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:10.513112    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:11.010572    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:11.509543    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.010570    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:12.409071    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:12.497495    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:12.497495    6296 retry.go:31] will retry after 9.788092681s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:12.510004    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:13.011338    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:13.509984    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:14.010499    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:14.511126    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.010949    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:15.511741    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:16.011278    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:16.511157    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:17.010863    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:17.577088    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:17.511273    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.010782    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:18.510594    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:19.011193    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:19.512050    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:20.011700    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:20.511001    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.010461    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:21.510457    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:22.011002    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:22.242227    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1217 02:08:22.290434    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:22.384800    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.384884    6296 retry.go:31] will retry after 11.75975207s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:22.424758    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.424758    6296 retry.go:31] will retry after 15.557196078s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:22.510556    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:23.011645    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:23.090496    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:23.176544    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:23.176625    6296 retry.go:31] will retry after 13.26458747s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:23.510872    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.011245    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:24.511483    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:25.011656    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:25.510967    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:26.012125    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:26.512672    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:27.011155    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:27.612061    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:27.512368    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:28.010889    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:28.511767    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.011035    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:29.512111    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:30.010919    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:30.510464    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:31.010433    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:31.511392    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.010680    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:32.510963    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:33.011818    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:33.511638    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:34.011591    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:34.151810    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:34.242474    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:34.242474    6296 retry.go:31] will retry after 23.644538854s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:34.513602    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.011269    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:35.511142    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:36.011267    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:36.446774    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	I1217 02:08:36.511283    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:36.541778    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:36.541860    6296 retry.go:31] will retry after 14.024805043s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:37.010743    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:37.653192    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:37.510520    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:37.987959    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1217 02:08:38.011587    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:08:38.113276    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:38.113276    6296 retry.go:31] will retry after 20.609884455s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:38.511817    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:39.012624    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:39.511353    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:40.011079    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:40.511636    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.011582    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:41.512671    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:42.011503    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:42.511640    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:43.011054    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:43.510485    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.011395    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:44.511333    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:45.011435    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:45.513316    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:46.012600    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:46.512307    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.012227    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:47.512888    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:48.011996    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:48.511276    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:49.011053    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:49.511776    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:50.011678    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:50.050889    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.050889    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:50.055201    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:50.085770    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.085770    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:50.090316    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:50.123762    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.123762    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:50.127529    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:50.157626    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.157626    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:50.163652    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:50.189945    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.189945    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:50.193620    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:50.222819    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.222866    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:50.227818    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:50.256909    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.256909    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:50.260970    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:50.290387    6296 logs.go:282] 0 containers: []
	W1217 02:08:50.290387    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:50.290387    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:50.290387    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:50.357876    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:50.357876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:50.420098    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:50.420098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:50.460376    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:50.460376    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:50.542989    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:50.534097    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.535406    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.536541    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.537655    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.539165    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:50.534097    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.535406    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.536541    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.537655    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:50.539165    3372 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:50.542989    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:50.542989    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:50.570331    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:08:50.645772    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:50.645772    6296 retry.go:31] will retry after 16.344343138s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:08:47.695483    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:08:53.075519    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:53.098924    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:53.131675    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.131675    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:53.135542    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:53.166511    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.166511    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:53.170265    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:53.198547    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.198547    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:53.202694    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:53.232459    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.232459    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:53.235758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:53.263802    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.263802    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:53.268318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:53.296956    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.296956    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:53.301349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:53.331331    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.331331    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:53.335255    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:53.367520    6296 logs.go:282] 0 containers: []
	W1217 02:08:53.367550    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:53.367577    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:53.367602    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:53.453750    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:53.444459    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.445431    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.446930    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.448003    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.449000    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:53.444459    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.445431    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.446930    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.448003    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:53.449000    3523 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:53.453837    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:53.453887    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:53.485058    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:53.485058    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:53.540050    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:53.540050    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:53.604101    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:53.604101    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.146858    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:56.172227    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:56.203897    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.203941    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:56.207562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:56.236114    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.236114    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:56.240341    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:56.274958    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.274958    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:56.280577    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:56.308906    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.308906    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:56.312811    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:56.340777    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.340836    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:56.343843    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:56.371408    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.371441    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:56.374771    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:56.406487    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.406487    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:56.410973    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:56.441247    6296 logs.go:282] 0 containers: []
	W1217 02:08:56.441247    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:56.441247    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:56.441247    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:56.506877    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:56.506877    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:56.548841    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:56.548841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:56.633101    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:56.624778    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.625942    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.626969    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.628325    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.629359    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:56.624778    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.625942    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.626969    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.628325    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:56.629359    3694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:08:56.633101    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:56.633101    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:56.659421    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:56.659457    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:57.892877    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:08:57.970838    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:57.970838    6296 retry.go:31] will retry after 27.385193451s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:58.728649    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:08:58.834139    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:58.834680    6296 retry.go:31] will retry after 32.13321777s: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	I1217 02:08:59.213728    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:08:59.238361    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:08:59.266298    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.266298    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:08:59.270295    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:08:59.299414    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.299414    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:08:59.302581    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:08:59.335627    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.335627    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:08:59.339238    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:08:59.367042    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.367042    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:08:59.371258    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:08:59.401507    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.401507    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:08:59.405468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:08:59.436657    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.436657    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:08:59.440955    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:08:59.471027    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.471027    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:08:59.474047    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:08:59.505164    6296 logs.go:282] 0 containers: []
	W1217 02:08:59.505164    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:08:59.505164    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:08:59.505164    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:08:59.533835    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:08:59.533835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:08:59.586695    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:08:59.587671    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:08:59.648841    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:08:59.648841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:08:59.688691    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:08:59.688691    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:08:59.777044    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:08:59.763261    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.764003    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.767722    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.770018    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.771065    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:08:59.763261    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.764003    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.767722    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.770018    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:08:59.771065    3890 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:02.282707    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:02.307570    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:02.340326    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.340412    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:02.343993    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:02.374035    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.374079    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:02.377688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	W1217 02:08:57.736771    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:02.409724    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.409724    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:02.414154    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:02.442993    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.442993    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:02.447591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:02.474966    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.474966    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:02.479447    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:02.511675    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.511675    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:02.515939    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:02.544034    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.544034    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:02.548633    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:02.578196    6296 logs.go:282] 0 containers: []
	W1217 02:09:02.578196    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:02.578196    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:02.578196    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:02.642449    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:02.643420    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:02.681562    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:02.681562    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:02.766017    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:02.754951    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.756418    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.757119    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.759531    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.760553    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:02.754951    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.756418    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.757119    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.759531    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:02.760553    4033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:02.766017    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:02.766017    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:02.795166    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:02.795166    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:05.347132    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:05.372840    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:05.424611    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.424686    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:05.428337    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:05.461682    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.461682    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:05.465790    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:05.495395    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.495395    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:05.499215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:05.528620    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.528620    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:05.532226    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:05.560375    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.560375    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:05.564119    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:05.595214    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.595214    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:05.600088    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:05.633183    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.633183    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:05.636776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:05.664840    6296 logs.go:282] 0 containers: []
	W1217 02:09:05.664840    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:05.664840    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:05.664840    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:05.718503    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:05.718503    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:05.781489    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:05.781489    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:05.821081    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:05.821081    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:05.905451    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:05.896107    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.897043    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.898918    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.899910    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.901056    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:05.896107    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.897043    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.898918    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.899910    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:05.901056    4222 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:05.905451    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:05.905451    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:06.996471    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml
	W1217 02:09:07.077056    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:07.077056    6296 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storageclass.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storageclass.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:09:08.443326    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:08.470285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:08.499191    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.499191    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:08.503346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:08.531727    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.531727    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:08.535874    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:08.567724    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.567724    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:08.571504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:08.601814    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.601814    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:08.605003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:08.638738    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.638815    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:08.642116    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:08.672949    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.672949    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:08.676953    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:08.706081    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.706145    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:08.709298    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:08.737856    6296 logs.go:282] 0 containers: []
	W1217 02:09:08.737856    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:08.737856    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:08.737856    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:08.798236    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:08.798236    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:08.838053    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:08.838053    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:08.925271    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:08.915579    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.916804    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.917832    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.919242    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.920277    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:08.915579    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.916804    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.917832    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.919242    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:08.920277    4377 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:08.925271    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:08.925271    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:08.952860    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:08.952934    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.505032    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:11.532273    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:11.560855    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.560907    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:11.564808    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:11.595967    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.596024    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:11.599911    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:11.628443    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.628443    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:11.632103    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:11.659899    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.659899    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:11.663896    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:11.695830    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.695864    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:11.699333    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:11.728245    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.728314    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:11.731834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:11.762004    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.762038    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:11.765497    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:11.800437    6296 logs.go:282] 0 containers: []
	W1217 02:09:11.800437    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:11.800437    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:11.800437    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:11.850659    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:11.850659    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:11.927328    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:11.927328    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:11.968115    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:11.968115    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:12.061366    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:12.049456    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.050395    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.051658    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.052989    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:12.055935    4550 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:12.061366    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:12.061366    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:09:07.775163    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
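
The interleaved 6768 lines belong to the no-preload test, which polls the node's Ready condition through the forwarded API port at 127.0.0.1:63565 and retries on EOF. Once an apiserver actually answers, the same condition can be read with kubectl, assuming minikube's usual profile-named context:

	# Hedged sketch: the Ready condition the retry loop above is waiting on.
	kubectl --context no-preload-184000 get node no-preload-184000 \
	  -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
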
	I1217 02:09:14.593463    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:14.619698    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:14.649625    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.649625    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:14.653809    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:14.682807    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.682865    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:14.686225    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:14.716867    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.716867    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:14.720947    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:14.748712    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.748712    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:14.753598    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:14.786467    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.786467    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:14.790745    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:14.820388    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.820388    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:14.824364    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:14.856683    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.856715    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:14.860387    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:14.907334    6296 logs.go:282] 0 containers: []
	W1217 02:09:14.907388    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:14.907388    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:14.907388    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:14.970536    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:14.971543    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:15.009837    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:15.009837    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:15.100833    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:15.089537    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.090644    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.091541    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.092652    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:15.093429    4694 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:15.100833    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:15.100833    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:15.129774    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:15.129838    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
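
The name=k8s_* filters in these docker ps probes lean on the cri-dockerd convention of naming pod containers k8s_<container>_<pod>_<namespace>_<uid>_<attempt>, so zero matches across all eight filters means no control-plane container was ever created. A hedged manual equivalent that catches any Kubernetes-managed container at once:

	# Hedged sketch: any container cri-dockerd created carries the k8s_ prefix.
	docker ps -a --filter name=k8s_ --format '{{.ID}} {{.Names}} {{.Status}}'
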
	I1217 02:09:17.687506    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:17.711884    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:17.740676    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.740676    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:17.743807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:17.775526    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.775598    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:17.779196    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:17.810564    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.810564    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:17.815366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:17.847149    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.847149    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:17.850304    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:17.880825    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.880825    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:17.884416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:17.913663    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.913663    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:17.917519    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:17.949675    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.949736    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:17.953399    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:17.981777    6296 logs.go:282] 0 containers: []
	W1217 02:09:17.981777    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:17.981853    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:17.981853    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:18.045143    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:18.045143    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:18.085682    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:18.085682    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:18.174824    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:18.164839    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.166260    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.167755    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.169313    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:18.170543    4853 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:18.174862    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:18.174890    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:18.201721    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:18.201721    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:20.754573    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:20.779418    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:20.815289    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.815336    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:20.821329    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:20.849494    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.849566    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:20.853416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:20.886139    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.886213    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:20.890864    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:20.921623    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.921691    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:20.925413    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:20.955001    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.955030    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:20.959115    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:20.986446    6296 logs.go:282] 0 containers: []
	W1217 02:09:20.986446    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:20.990622    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:21.019381    6296 logs.go:282] 0 containers: []
	W1217 02:09:21.019903    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:21.023386    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:21.049708    6296 logs.go:282] 0 containers: []
	W1217 02:09:21.049708    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:21.049708    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:21.049708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:21.114512    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:21.114512    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:21.154312    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:21.154312    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:21.241835    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:21.232254    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.233191    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.235446    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.236247    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:21.238241    5013 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:21.241835    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:21.241835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:21.269935    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:21.269935    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:09:17.811223    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:23.827385    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:23.851293    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:23.884017    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.884017    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:23.887852    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:23.920819    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.920819    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:23.925124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:23.953397    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.953468    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:23.957090    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:23.987965    6296 logs.go:282] 0 containers: []
	W1217 02:09:23.987965    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:23.992238    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:24.021188    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.021188    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:24.027472    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:24.059066    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.059066    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:24.062927    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:24.092066    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.092066    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:24.096083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:24.130020    6296 logs.go:282] 0 containers: []
	W1217 02:09:24.130083    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:24.130083    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:24.130083    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:24.193264    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:24.193264    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:24.233590    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:24.233590    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:24.334738    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:24.323376    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.324478    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.325163    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327407    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:24.327995    5169 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:24.334738    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:24.334738    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:24.361711    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:24.361711    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
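
The Docker log pass merges two systemd units in a single journalctl call (-u may be repeated) and caps output at the last 400 lines (-n 400). The same query can be run interactively from the host; the profile name is hypothetical here:

	# Hedged sketch: read the merged docker/cri-docker journal from the node.
	minikube ssh -p <profile> "sudo journalctl -u docker -u cri-docker -n 400"
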
	I1217 02:09:25.361736    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml
	W1217 02:09:25.443830    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:25.443830    6296 out.go:285] ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error: error validating "/etc/kubernetes/addons/storage-provisioner.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
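
The storage-provisioner apply fails at the validation step only because kubectl cannot download the OpenAPI schema from the dead apiserver; the stderr itself suggests --validate=false, though with nothing listening on 8443 the apply would still be refused at submission. A sketch of the suggested retry, reusing the exact paths from the log:

	# Hedged sketch: the retry the error message suggests. Validation is
	# skipped, but the apply still needs a reachable apiserver to succeed.
	sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	  /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force \
	  --validate=false -f /etc/kubernetes/addons/storage-provisioner.yaml
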
	I1217 02:09:26.915928    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:26.940552    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:26.972265    6296 logs.go:282] 0 containers: []
	W1217 02:09:26.972334    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:26.975468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:27.004131    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.004131    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:27.007688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:27.040755    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.040755    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:27.044298    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:27.075607    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.075607    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:27.079764    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:27.109726    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.109726    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:27.113807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:27.142060    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.142060    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:27.145049    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:27.179827    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.179898    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:27.183340    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:27.212340    6296 logs.go:282] 0 containers: []
	W1217 02:09:27.212340    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:27.212340    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:27.212340    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:27.290453    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:27.280957    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.282008    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.283593    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.284873    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:27.286226    5333 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:27.290453    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:27.290453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:27.317919    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:27.317919    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:27.372636    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:27.372636    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:27.434881    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:27.434881    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
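
Each diagnostic cycle opens with pgrep -xnf, i.e. match the pattern exactly (-x) against the full command line (-f) and return only the newest PID (-n); every cycle here finds nothing, which is consistent with the connection-refused errors on 8443. A hedged way to confirm both facts from inside the node:

	# Hedged sketch: no apiserver process and nothing listening on 8443.
	sudo pgrep -xnf 'kube-apiserver.*minikube.*' || echo 'no kube-apiserver process'
	curl -ksS https://localhost:8443/healthz || true   # expect: connection refused
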
	I1217 02:09:29.980965    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:30.007081    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:30.038766    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.038766    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:30.042837    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:30.074216    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.074277    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:30.077495    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:30.109815    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.109815    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:30.113543    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:30.144692    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.144692    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:30.148595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:30.181530    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.181530    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:30.185056    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:30.230054    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.230054    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:30.233965    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:30.264421    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.264421    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:30.268191    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:30.302463    6296 logs.go:282] 0 containers: []
	W1217 02:09:30.302463    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:30.302463    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:30.302463    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:30.369905    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:30.369905    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:30.407364    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:30.407364    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:30.501045    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:30.489137    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.491259    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.493208    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.494311    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:30.496063    5495 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:30.501045    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:30.501045    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:30.529058    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:30.529119    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:30.973740    6296 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	W1217 02:09:31.053832    6296 addons.go:477] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	W1217 02:09:31.053832    6296 out.go:285] ! Enabling 'dashboard' returned an error: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl apply --force -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: Process exited with status 1
	stdout:
	
	stderr:
	error validating "/etc/kubernetes/addons/dashboard-ns.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrole.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-clusterrolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-configmap.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-dp.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-role.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-rolebinding.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-sa.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-secret.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	error validating "/etc/kubernetes/addons/dashboard-svc.yaml": error validating data: failed to download openapi: Get "https://localhost:8443/openapi/v2?timeout=32s": dial tcp [::1]:8443: connect: connection refused; if you choose to ignore these errors, turn validation off with --validate=false
	]
	I1217 02:09:31.057712    6296 out.go:179] * Enabled addons: 
	I1217 02:09:31.060716    6296 addons.go:530] duration metric: took 1m41.3245326s for enable addons: enabled=[]
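
Both addon applies were abandoned, so the phase closes with enabled=[] after roughly 1m41s of retries. On a cluster whose apiserver has recovered, the same addons can be re-applied from the host; <profile> is hypothetical and stands for this cluster's profile name:

	# Hedged sketch: re-enable the abandoned addons once the apiserver is up.
	minikube addons enable storage-provisioner -p <profile>
	minikube addons enable dashboard -p <profile>
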
	W1217 02:09:27.847902    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:33.093000    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:33.117479    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:33.148299    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.148299    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:33.152403    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:33.180747    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.180747    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:33.184258    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:33.214319    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.214389    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:33.217921    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:33.244463    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.244463    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:33.248882    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:33.280520    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.280573    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:33.284251    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:33.313836    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.313883    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:33.318949    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:33.351545    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.351545    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:33.355242    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:33.384638    6296 logs.go:282] 0 containers: []
	W1217 02:09:33.384638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:33.384638    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:33.384638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:33.438624    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:33.438624    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:33.503148    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:33.504145    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:33.542770    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:33.542770    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:33.628872    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:33.616788    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.618355    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.619202    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.622311    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:33.623559    5697 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:09:33.628872    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:33.628872    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
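
The repeated "connection refused" errors above are the root symptom: nothing is listening on the apiserver port inside the node, so kubectl's discovery request to https://localhost:8443/api fails on every retry and "describe nodes" can never run. A quick way to confirm this from the host is to probe the port from inside the node; a minimal sketch, using a profile name taken from the parallel test in this log (substitute the profile of the failing run):

	# hypothetical profile name; use the profile from the failing test
	minikube ssh -p no-preload-184000 -- sudo ss -ltnp | grep 8443
	minikube ssh -p no-preload-184000 -- curl -sk https://localhost:8443/healthz

If ss reports no listener on 8443, the apiserver container never came up, which is consistent with the empty docker ps scans that follow (healthz may return an auth error on some clusters; any HTTP response at all still proves a listener exists).
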
	I1217 02:09:36.163766    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:36.190660    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:36.219485    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.219485    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:36.223169    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:36.253826    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.253826    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:36.257584    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:36.289684    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.289684    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:36.293455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:36.321228    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.321228    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:36.326076    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:36.355893    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.355893    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:36.360432    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:36.392307    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.392359    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:36.395377    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:36.427797    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.427797    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:36.431432    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:36.465462    6296 logs.go:282] 0 containers: []
	W1217 02:09:36.465547    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:36.465590    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:36.465605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:36.515585    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:36.515688    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:36.577828    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:36.577828    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:36.617923    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:36.617923    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:36.706865    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:36.696037    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.697154    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.698217    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.699314    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.700190    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:36.696037    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.697154    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.698217    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.699314    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:36.700190    5858 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:36.706865    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:36.706865    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
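
Each retry iteration above has the same shape: probe for a running apiserver process with pgrep, scan for the k8s_* containers the kubelet would have created, and, when every scan comes back empty, fall back to gathering kubelet, dmesg, "describe nodes", and Docker logs. The per-component scan can be reproduced by hand inside the node; a stand-alone sketch (not minikube's actual code) of the same loop:

	#!/bin/bash
	# Scan for the control-plane containers the log above is looking for.
	for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
	         kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo docker ps -a --filter=name=k8s_$c --format='{{.ID}}')
	  [ -z "$ids" ] && echo "no container matching \"$c\"" || echo "$c: $ids"
	done
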
	I1217 02:09:39.240583    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:39.269426    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:39.300548    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.300548    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:39.304455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:39.337640    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.337640    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:39.341427    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:39.375280    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.375280    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:39.379328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:39.408206    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.408291    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:39.413138    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:39.439760    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.439760    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:39.443728    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:39.470865    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.471120    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:39.477630    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:39.510101    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.510101    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:39.515759    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:39.545423    6296 logs.go:282] 0 containers: []
	W1217 02:09:39.545494    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:39.545494    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:39.545559    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:39.574474    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:39.574474    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:39.627410    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:39.627410    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:39.687852    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:39.687852    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:39.730823    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:39.730823    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:39.820771    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:39.809479    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.810890    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.811655    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.814487    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.816836    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:39.809479    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.810890    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.811655    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.814487    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:39.816836    6021 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
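
The five near-identical memcache.go:265 lines per attempt come from client-go's cached discovery: before kubectl can describe anything it must fetch the server's API group list, and it retries that request several times before surfacing the final "connection refused". To see the exact URLs and response codes the client is dialing, the same command from the log can be rerun with request-level verbosity:

	sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes \
	  --kubeconfig=/var/lib/minikube/kubeconfig -v=6
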
	I1217 02:09:42.326489    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:42.349989    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:42.381673    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.381673    6296 logs.go:284] No container was found matching "kube-apiserver"
	W1217 02:09:37.889672    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:09:42.385392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:42.414575    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.414575    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:42.418510    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:42.452120    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.452120    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:42.456157    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:42.484625    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.484625    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:42.487782    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:42.520235    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.520235    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:42.525546    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:42.558589    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.558589    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:42.561770    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:42.592364    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.592364    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:42.596368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:42.625522    6296 logs.go:282] 0 containers: []
	W1217 02:09:42.625522    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:42.625522    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:42.625522    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:42.661616    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:42.661616    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:42.748046    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:42.737433    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.739312    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.740542    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.743197    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.744170    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:42.737433    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.739312    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.740542    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.743197    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:42.744170    6164 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:42.748046    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:42.748046    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:42.778854    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:42.778854    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:42.827860    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:42.827860    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:45.394220    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:45.418501    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:45.453084    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.453132    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:45.457433    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:45.491679    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.491679    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:45.495517    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:45.524934    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.524934    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:45.528788    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:45.559787    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.559837    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:45.563714    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:45.608019    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.608104    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:45.612132    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:45.639869    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.639869    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:45.644002    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:45.671767    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.671767    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:45.675466    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:45.704056    6296 logs.go:282] 0 containers: []
	W1217 02:09:45.704104    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:45.704104    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:45.704104    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:45.766557    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:45.766557    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:45.807449    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:45.807449    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:45.898686    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:45.887850    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.888794    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.889893    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.891161    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.894108    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:45.887850    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.888794    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.889893    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.891161    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:45.894108    6325 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:45.898686    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:45.898686    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:45.924614    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:45.924614    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
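
All four "Gathering logs" sources are plain shell commands, so the same diagnostics bundle can be pulled manually when triaging a node in this state (for example over minikube ssh). Lightly reformatted from the invocations above:

	sudo journalctl -u kubelet -n 400                                          # kubelet
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400    # kernel warnings
	sudo journalctl -u docker -u cri-docker -n 400                             # Docker + CRI shim
	sudo "$(which crictl || echo crictl)" ps -a || sudo docker ps -a           # container status

The last line is a fallback idiom: when crictl is installed, which resolves its path and its ps -a runs; when it is not, the bare word crictl fails and control falls through to sudo docker ps -a.
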
	I1217 02:09:48.482563    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:48.510137    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:48.546063    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.546063    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:48.551905    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:48.588536    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.588617    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:48.592628    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:48.621540    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.621540    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:48.625701    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:48.653505    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.653505    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:48.659485    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:48.688940    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.689008    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:48.692649    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:48.718858    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.718858    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:48.722907    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:48.752451    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.752451    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:48.755913    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:48.785865    6296 logs.go:282] 0 containers: []
	W1217 02:09:48.785903    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:48.785903    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:48.785948    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:48.842730    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:48.843261    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:48.905352    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:48.905352    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:48.945271    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:48.945271    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:49.027913    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:49.016272    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.017718    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.022195    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.023419    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.024431    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:49.016272    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.017718    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.022195    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.023419    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:49.024431    6503 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:49.027963    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:49.027963    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:51.563182    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:51.587223    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:51.619597    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.619621    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:51.623355    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:51.652069    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.652152    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:51.655716    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:51.684602    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.684653    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:51.687735    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:51.716327    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.716327    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:51.720054    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:51.750202    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.750266    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:51.753821    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:51.781863    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.781863    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:51.785648    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:51.814791    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.814841    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:51.818565    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:51.850654    6296 logs.go:282] 0 containers: []
	W1217 02:09:51.850654    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:51.850654    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:51.850654    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:51.912429    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:51.912429    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:51.951795    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:51.951795    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:52.035486    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:52.024665    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.026342    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.028055    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.029764    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.030775    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:52.024665    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.026342    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.028055    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.029764    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:52.030775    6649 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:52.035486    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:52.035486    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:52.063472    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:52.063472    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:09:47.930106    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
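
The stray node_ready.go:55 warnings carrying a different PID (6768) are interleaved from a parallel test run against the no-preload-184000 profile; they appear out of timestamp order because the two processes flush into the shared log independently. That test is polling the node's Ready condition through the apiserver port published on the host (127.0.0.1:63565 here) and getting EOF, i.e. the TCP connection is accepted but closed before an HTTP response arrives. The same endpoint can be probed directly from the host (port taken from the log; without a client certificate from the profile's kubeconfig a live apiserver would answer 401/403 rather than EOF):

	curl -k https://127.0.0.1:63565/api/v1/nodes/no-preload-184000
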
	I1217 02:09:54.631678    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:54.657392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:54.689037    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.689037    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:54.692460    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:54.723231    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.723231    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:54.729158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:54.759168    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.759168    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:54.762883    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:54.792371    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.792371    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:54.796165    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:54.828375    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.828375    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:54.832201    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:54.862409    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.862476    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:54.866107    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:54.897161    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.897161    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:54.900834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:54.947452    6296 logs.go:282] 0 containers: []
	W1217 02:09:54.947452    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:54.947452    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:54.947452    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:55.016411    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:55.016411    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:55.055628    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:55.055628    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:55.152557    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:55.141168    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.142077    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.145931    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.147597    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.148932    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:55.141168    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.142077    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.145931    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.147597    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:55.148932    6812 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:55.152599    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:55.152599    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:55.180492    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:55.180492    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:09:57.741989    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:09:57.768328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:09:57.799200    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.799200    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:09:57.803065    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:09:57.832042    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.832042    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:09:57.835921    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:09:57.863829    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.863891    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:09:57.867347    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:09:57.896797    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.896822    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:09:57.900369    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:09:57.929832    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.929907    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:09:57.933326    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:09:57.960278    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.960278    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:09:57.964215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:09:57.992277    6296 logs.go:282] 0 containers: []
	W1217 02:09:57.992324    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:09:57.995951    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:09:58.026155    6296 logs.go:282] 0 containers: []
	W1217 02:09:58.026254    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:09:58.026254    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:09:58.026303    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:09:58.091999    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:09:58.091999    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:09:58.131520    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:09:58.131520    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:09:58.226831    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:09:58.216784    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.218266    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.219997    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.221198    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.222992    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:09:58.216784    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.218266    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.219997    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.221198    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:09:58.222992    6975 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:09:58.226831    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:09:58.226831    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:09:58.256592    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:09:58.256635    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:00.809919    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:00.842222    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:00.872955    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.872955    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:00.876666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:00.906031    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.906031    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:00.909593    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:00.939873    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.939946    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:00.943346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:00.972609    6296 logs.go:282] 0 containers: []
	W1217 02:10:00.972643    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:00.975886    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:01.005269    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.005269    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:01.009766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:01.041677    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.041677    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:01.048361    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:01.081235    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.081312    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:01.084849    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:01.113437    6296 logs.go:282] 0 containers: []
	W1217 02:10:01.113437    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:01.113437    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:01.113437    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:01.160067    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:01.160624    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:01.225071    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:01.225071    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:01.265307    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:01.265307    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:01.348506    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:01.336920    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.338210    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.339738    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.341232    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.342188    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:01.336920    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.338210    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.339738    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.341232    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:01.342188    7160 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:01.348535    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:01.348571    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:09:57.967423    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:03.891628    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:03.925404    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:03.965688    6296 logs.go:282] 0 containers: []
	W1217 02:10:03.965688    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:03.968982    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:04.006348    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.006348    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:04.009769    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:04.039968    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.039968    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:04.044404    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:04.078472    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.078472    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:04.081894    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:04.113348    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.113348    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:04.117138    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:04.148885    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.148885    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:04.152756    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:04.181559    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.181616    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:04.185351    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:04.217017    6296 logs.go:282] 0 containers: []
	W1217 02:10:04.217017    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:04.217017    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:04.217017    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:04.284540    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:04.284540    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:04.324402    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:04.324402    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:04.409943    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:04.395416    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.396326    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.402206    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.403321    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.404006    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:04.395416    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.396326    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.402206    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.403321    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:04.404006    7311 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:04.409943    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:04.409943    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:04.438771    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:04.438771    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:06.997897    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:07.024185    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:07.054915    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.055512    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:07.060167    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:07.089778    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.089778    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:07.093773    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:07.124641    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.124641    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:07.128016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:07.154834    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.154915    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:07.158505    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:07.188568    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.188568    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:07.192962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:07.225078    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.225078    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:07.228699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:07.258599    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.258659    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:07.262590    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:07.291623    6296 logs.go:282] 0 containers: []
	W1217 02:10:07.291623    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:07.291623    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:07.291623    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:07.322611    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:07.322611    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:07.374970    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:07.374970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:07.438795    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:07.438795    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:07.479442    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:07.479442    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:07.566162    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:07.555486    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.557015    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.558199    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559195    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559622    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:07.555486    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.557015    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.558199    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559195    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:07.559622    7493 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:10.072312    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:10.096505    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:10.125617    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.125617    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:10.129377    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:10.157921    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.157921    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:10.161850    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:10.191705    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.191705    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:10.196003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:10.224412    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.224482    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:10.229368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:10.258140    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.258140    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:10.261205    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:10.292047    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.292047    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:10.296511    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:10.325818    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.325818    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:10.329752    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:10.359454    6296 logs.go:282] 0 containers: []
	W1217 02:10:10.359530    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:10.359530    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:10.359530    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:10.413970    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:10.413970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:10.476665    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:10.476665    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:10.516335    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:10.516335    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:10.602353    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:10.592838    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.594139    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.595393    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.596552    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.597619    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:10.592838    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.594139    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.595393    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.596552    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:10.597619    7654 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:10.602353    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:10.602353    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	W1217 02:10:08.007712    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:13.134148    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:13.159720    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:13.191534    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.191534    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:13.195626    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:13.230035    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.230035    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:13.233817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:13.266476    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.266476    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:13.270598    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:13.305852    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.305852    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:13.310349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:13.341805    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.341867    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:13.345346    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:13.377945    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.377945    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:13.381659    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:13.411885    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.411957    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:13.416039    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:13.446642    6296 logs.go:282] 0 containers: []
	W1217 02:10:13.446642    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:13.446642    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:13.446642    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:13.487083    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:13.487083    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:13.574632    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:13.564930    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.565686    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.568158    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.569159    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.570310    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:13.564930    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.565686    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.568158    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.569159    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:13.570310    7794 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:13.574632    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:13.574632    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:13.604181    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:13.604702    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:13.660020    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:13.660020    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:16.225038    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:16.248922    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:16.280247    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.280247    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:16.284285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:16.312596    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.312596    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:16.316952    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:16.345108    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.345108    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:16.348083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:16.377403    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.377403    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:16.380619    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:16.410555    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.410555    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:16.414048    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:16.446454    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.446454    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:16.449405    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:16.478967    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.478967    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:16.484108    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:16.516422    6296 logs.go:282] 0 containers: []
	W1217 02:10:16.516422    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:16.516422    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:16.516422    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:16.580305    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:16.580305    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:16.618663    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:16.618663    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:16.705105    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:16.694074    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.695040    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.696842    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.698676    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.700646    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:16.694074    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.695040    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.696842    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.698676    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:16.700646    7956 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:16.705105    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:16.705105    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:16.732046    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:16.732046    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:19.284431    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:19.307909    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:19.340842    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.340842    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:19.344830    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:19.371150    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.371150    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:19.374863    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:19.403216    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.403216    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:19.406907    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:19.433979    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.433979    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:19.438046    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:19.469636    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.469636    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:19.473675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:19.504296    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.504296    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:19.508671    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:19.535932    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.535932    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:19.539707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:19.567355    6296 logs.go:282] 0 containers: []
	W1217 02:10:19.567416    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:19.567416    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:19.567416    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:19.629876    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:19.629876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:19.678547    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:19.678547    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:19.785306    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:19.776195    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.777270    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.778111    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.779442    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.780820    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:19.776195    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.777270    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.778111    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.779442    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:19.780820    8116 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:19.785306    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:19.785371    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:19.813137    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:19.813137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:22.369643    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W1217 02:10:18.049946    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:22.396731    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:22.431018    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.431018    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:22.434688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:22.463307    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.463307    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:22.467323    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:22.497065    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.497065    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:22.500574    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:22.531497    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.531564    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:22.535088    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:22.563706    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.563779    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:22.567344    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:22.602516    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.602597    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:22.606242    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:22.637637    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.637699    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:22.641314    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:22.668078    6296 logs.go:282] 0 containers: []
	W1217 02:10:22.668078    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:22.668078    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:22.668078    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:22.754963    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:22.744973    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.745956    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.748143    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.749016    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.751155    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:22.744973    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.745956    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.748143    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.749016    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:22.751155    8271 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:22.754963    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:22.754963    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:22.783172    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:22.783222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:22.840048    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:22.840048    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:22.900137    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:22.900137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:25.445900    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:25.472646    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:25.502929    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.502929    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:25.506274    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:25.537721    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.537721    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:25.543044    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:25.572924    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.572924    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:25.576391    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:25.607737    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.607798    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:25.611457    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:25.644967    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.645041    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:25.648690    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:25.677801    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.677801    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:25.681530    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:25.709148    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.709148    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:25.715667    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:25.746892    6296 logs.go:282] 0 containers: []
	W1217 02:10:25.746892    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:25.746892    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:25.746892    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:25.796336    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:25.796336    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:25.862353    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:25.862353    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:25.902100    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:25.902100    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:25.988926    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:25.979946    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.980923    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.983755    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.985453    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.986609    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:25.979946    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.980923    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.983755    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.985453    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:25.986609    8446 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:25.988926    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:25.988926    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:28.523475    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:28.549366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:28.580055    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.580055    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:28.583822    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:28.615168    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.615168    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:28.618724    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:28.650344    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.650368    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:28.654014    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:28.704033    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.704033    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:28.707699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:28.738871    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.738938    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:28.743270    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:28.775432    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.775432    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:28.779176    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:28.810234    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.810351    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:28.814357    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:28.845783    6296 logs.go:282] 0 containers: []
	W1217 02:10:28.845783    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:28.845783    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:28.845783    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:28.902626    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:28.902626    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:28.963758    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:28.963758    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:29.002141    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:29.002141    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:29.104674    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:29.094415    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.095636    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.096872    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.097927    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.099112    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:29.094415    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.095636    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.096872    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.097927    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:29.099112    8618 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:10:29.104674    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:29.104674    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:31.640270    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:31.668862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:31.703099    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.703099    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:31.706355    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:31.737408    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.737408    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:31.741549    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:31.771462    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.771549    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:31.775645    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:31.803600    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.803600    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:31.807313    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:31.835884    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.835884    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:31.840000    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:31.870518    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.870518    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:31.877548    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:31.905387    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.905387    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:31.909722    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:31.938258    6296 logs.go:282] 0 containers: []
	W1217 02:10:31.938284    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:31.938284    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:31.938284    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:32.000115    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:32.000115    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:32.039351    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:32.039351    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:32.128849    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:32.117556    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.118519    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.121192    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.122137    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:32.123350    8765 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:32.128849    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:32.128849    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:32.155670    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:32.155670    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
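The block above is one pass of minikube's control-plane diagnosis: look for a kube-apiserver process, then ask Docker for each expected component container (k8s_kube-apiserver, k8s_etcd, k8s_coredns, k8s_kube-scheduler, k8s_kube-proxy, k8s_kube-controller-manager, k8s_kindnet, k8s_kubernetes-dashboard). Every probe reports "0 containers: []" because the control plane never started, so the log gathering that follows can only collect kubelet, dmesg, Docker, and container-status output. A minimal Go sketch of the same per-component presence check, assuming a local docker CLI; the helper and variable names are illustrative, not minikube's actual logs.go code:

// probe.go - sketch of the container presence check seen in the log:
// docker ps -a --filter=name=k8s_<component> --format={{.ID}}
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

var componentNames = []string{
	"kube-apiserver", "etcd", "coredns", "kube-scheduler",
	"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard",
}

// containerIDs returns the IDs of containers (running or exited) whose
// name matches k8s_<component>; empty output means zero containers.
func containerIDs(component string) ([]string, error) {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter", "name=k8s_"+component, "--format", "{{.ID}}").Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil // one ID per line
}

func main() {
	for _, c := range componentNames {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Printf("probe %s: %v\n", c, err)
			continue
		}
		fmt.Printf("%d containers for %q: %v\n", len(ids), c, ids)
	}
}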
	W1217 02:10:28.083644    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
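The warning just above carries PID 6768 and comes from node_ready.go rather than PID 6296: it is interleaved output from a parallel test run polling node "no-preload-184000" through https://127.0.0.1:63565. It is unrelated to the start being diagnosed by PID 6296 in the surrounding lines; the two tests simply write to the same stream.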
	I1217 02:10:34.707099    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:34.732689    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:34.763625    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.763625    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:34.767349    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:34.797435    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.797435    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:34.801415    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:34.828785    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.828785    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:34.832654    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:34.864748    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.864748    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:34.868392    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:34.896365    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.896365    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:34.900474    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:34.932681    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.932681    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:34.936571    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:34.966056    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.966056    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:34.969208    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:34.998362    6296 logs.go:282] 0 containers: []
	W1217 02:10:34.998362    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:34.998362    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:34.998362    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:35.036977    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:35.036977    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:35.134841    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:35.123096    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.125161    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.126319    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.127728    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:35.129900    8920 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:35.134841    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:35.134841    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:35.162429    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:35.162429    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:35.213960    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:35.214015    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
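Each "describe nodes" attempt dies the same way: kubectl resolves localhost to [::1], dials port 8443, and gets connection refused because nothing is listening, so API group discovery (the memcache.go lines) can never begin. The same condition can be confirmed without kubectl by a bare TCP probe; a hedged Go sketch, where the address is copied from the log and the helper name is an illustration:

// dialcheck.go - sketch: is anything listening on the apiserver port?
package main

import (
	"fmt"
	"net"
	"time"
)

// apiserverUp reports whether a TCP connection to addr succeeds within
// timeout; "connection refused", as in the log above, returns false.
func apiserverUp(addr string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("tcp", addr, timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	fmt.Println("apiserver listening:", apiserverUp("localhost:8443", 2*time.Second))
}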
	I1217 02:10:37.779857    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:37.806799    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:37.840730    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.840730    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:37.846443    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:37.875504    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.875504    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:37.879215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:37.910068    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.910068    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:37.913551    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:37.942897    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.942897    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:37.946741    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:37.978321    6296 logs.go:282] 0 containers: []
	W1217 02:10:37.978321    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:37.982267    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:38.008421    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.008421    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:38.013043    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:38.043041    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.043041    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:38.049737    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:38.082117    6296 logs.go:282] 0 containers: []
	W1217 02:10:38.082117    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:38.082117    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:38.082117    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:38.148970    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:38.148970    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:38.189697    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:38.189697    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:38.276122    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:38.265842    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.267106    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.268317    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.270927    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:38.272044    9087 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:38.276122    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:38.276122    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:38.304355    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:38.304355    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
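The "container status" command is itself a fallback chain: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a runs crictl when it is on PATH; otherwise the bare word crictl fails and the shell falls through to docker ps -a. A rough Go equivalent of that selection (illustrative only, and checking PATH up front instead of falling back on failure):

// runtimeps.go - sketch of the crictl-or-docker fallback for listing
// all containers, approximating the shell one-liner in the log.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("docker", "ps", "-a") // fallback
	if _, err := exec.LookPath("crictl"); err == nil {
		cmd = exec.Command("crictl", "ps", "-a") // preferred when installed
	}
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Println("runtime ps failed:", err)
	}
	fmt.Print(string(out))
}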
	I1217 02:10:40.862712    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:40.889041    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:40.921169    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.921169    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:40.924297    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:40.956313    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.956356    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:40.960294    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:40.990144    6296 logs.go:282] 0 containers: []
	W1217 02:10:40.990144    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:40.993876    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:41.026732    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.026803    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:41.030745    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:41.073825    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.073825    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:41.078152    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:41.105859    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.105859    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:41.111714    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:41.143286    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.143324    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:41.146776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:41.176314    6296 logs.go:282] 0 containers: []
	W1217 02:10:41.176345    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:41.176345    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:41.176345    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:41.213266    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:41.213266    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:41.300305    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:41.290426    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.291562    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.292511    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.293690    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:41.294979    9246 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:41.300305    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:41.300305    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:41.328560    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:41.328621    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:41.375953    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:41.375953    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1217 02:10:38.119927    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:43.941613    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:43.967455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:44.000199    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.000199    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:44.003568    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:44.035058    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.035058    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:44.040590    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:44.083687    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.083687    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:44.087476    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:44.115776    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.115776    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:44.119318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:44.155471    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.155513    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:44.159433    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:44.191599    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.191636    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:44.195145    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:44.228181    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.228211    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:44.231971    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:44.259687    6296 logs.go:282] 0 containers: []
	W1217 02:10:44.259763    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:44.259763    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:44.259763    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:44.323705    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:44.323705    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:44.365401    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:44.365401    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:44.453893    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:44.444848    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.446165    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.447569    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.449198    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:44.450326    9406 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:44.453893    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:44.453893    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:44.480694    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:44.480694    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:47.042501    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:47.067663    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:47.108433    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.108433    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:47.112206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:47.144336    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.144336    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:47.148449    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:47.182968    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.183049    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:47.186614    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:47.215738    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.215738    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:47.219595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:47.248444    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.248511    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:47.252434    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:47.280975    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.280975    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:47.284966    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:47.317178    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.317178    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:47.321223    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:47.352638    6296 logs.go:282] 0 containers: []
	W1217 02:10:47.352638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:47.352638    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:47.352638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:47.390049    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:47.390049    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:47.479425    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:47.469913    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.471092    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.472262    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.473545    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:47.474680    9563 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:47.479425    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:47.479425    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:47.505331    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:47.505331    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:47.556431    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:47.556431    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:50.124255    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:50.151100    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:50.184499    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.184565    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:50.187696    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:50.221764    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.221764    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:50.225471    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:50.253823    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.253823    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:50.260470    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:50.289768    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.289815    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:50.295283    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:50.321597    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.321597    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:50.325774    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:50.356707    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.356707    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:50.360685    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:50.390099    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.390099    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:50.393971    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:50.420950    6296 logs.go:282] 0 containers: []
	W1217 02:10:50.420950    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:50.420950    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:50.420950    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:50.484730    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:50.484730    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:50.523997    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:50.523997    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:50.618256    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:50.607046    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.608047    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.610609    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.611743    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:50.612938    9726 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:50.618256    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:50.618256    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:50.645077    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:50.645077    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W1217 02:10:48.158175    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:10:53.200622    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:53.223348    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:53.253589    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.253589    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:53.258688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:53.287647    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.287689    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:53.291555    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:53.324358    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.324403    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:53.327650    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:53.355417    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.355417    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:53.359780    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:53.390012    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.390012    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:53.393536    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:53.420636    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.420672    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:53.424429    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:53.453665    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.453744    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:53.456764    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:53.486769    6296 logs.go:282] 0 containers: []
	W1217 02:10:53.486836    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:53.486875    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:53.486875    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:53.552513    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:53.552513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:53.593054    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:53.593054    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:53.683171    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:53.673168    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.674217    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.677093    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.678848    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:53.679784    9885 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:53.683207    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:53.683230    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:53.712513    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:53.712513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:56.288600    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:56.314380    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:56.347447    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.347447    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:56.351158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:56.381779    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.381779    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:56.385232    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:56.423000    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.423000    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:56.427083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:56.456635    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.456635    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:56.460509    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:56.490868    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.490868    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:56.496594    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:56.523671    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.523671    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:56.527847    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:56.559992    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.559992    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:56.565352    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:56.591708    6296 logs.go:282] 0 containers: []
	W1217 02:10:56.591708    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:56.591708    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:56.591708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:56.656572    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:56.656572    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:56.696334    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:56.696334    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:56.788411    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:56.777962   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.779251   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.780163   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.782593   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:56.783670   10054 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:10:56.788411    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:56.788411    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:56.815762    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:56.815762    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:59.370676    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:10:59.404615    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:10:59.440735    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.440735    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:10:59.446758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:10:59.475209    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.475209    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:10:59.479521    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:10:59.509465    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.509465    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:10:59.513228    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:10:59.542409    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.542409    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:10:59.546008    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:10:59.575778    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.575778    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:10:59.579759    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:10:59.613465    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.613465    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:10:59.617266    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:10:59.645245    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.645245    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:10:59.649170    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:10:59.680413    6296 logs.go:282] 0 containers: []
	W1217 02:10:59.680449    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:10:59.680449    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:10:59.680449    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:10:59.713987    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:10:59.713987    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:10:59.764930    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:10:59.764994    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:10:59.832077    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:10:59.832077    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:10:59.870681    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:10:59.870681    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:10:59.953336    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:10:59.942085   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.942906   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.945651   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.947051   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.948218   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:10:59.942085   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.942906   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.945651   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.947051   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:10:59.948218   10241 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
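
Every retry of "describe nodes" above fails the same way: the in-node kubectl dials the apiserver at localhost:8443 and gets "connection refused", which is consistent with the container scans reporting 0 containers for "k8s_kube-apiserver" — nothing is listening on that port. As a minimal sketch (plain Go, not minikube source), the reachability check implied by those errors reduces to a TCP dial:

    // probe_apiserver.go — hedged sketch of the check implied by the log:
    // is anything listening on the in-node apiserver endpoint?
    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// localhost:8443 is the endpoint the failing kubectl calls dial.
    	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
    	if err != nil {
    		// In the failing run this prints the same symptom as the log:
    		// dial tcp [::1]:8443: connect: connection refused
    		fmt.Println("apiserver unreachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("apiserver port is open")
    }
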
	W1217 02:10:58.200115    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	I1217 02:11:02.457745    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:02.492666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:02.526665    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.526665    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:02.530862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:02.560353    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.560413    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:02.564099    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:02.595430    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.595430    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:02.599884    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:02.629744    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.629744    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:02.633637    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:02.662623    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.662623    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:02.666817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:02.694696    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.694696    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:02.698194    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:02.727384    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.727442    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:02.731483    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:02.766114    6296 logs.go:282] 0 containers: []
	W1217 02:11:02.766114    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:02.766114    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:02.766114    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:02.830755    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:02.830755    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:02.870216    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:02.870216    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:02.958327    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:02.947356   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.948306   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.949403   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.950298   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.952486   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:02.947356   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.948306   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.949403   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.950298   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:02.952486   10384 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:02.958327    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:02.958380    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:02.984980    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:02.984980    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:05.540158    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:05.564812    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:05.595638    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.595638    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:05.599748    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:05.628748    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.628748    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:05.632878    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:05.666232    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.666257    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:05.670293    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:05.699654    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.699654    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:05.703004    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:05.733113    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.733113    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:05.737096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:05.765591    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.765639    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:05.770398    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:05.796360    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.796360    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:05.800240    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:05.829847    6296 logs.go:282] 0 containers: []
	W1217 02:11:05.829914    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:05.829914    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:05.829945    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:05.880789    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:05.880789    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:05.943002    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:05.943002    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:05.983389    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:05.983389    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:06.076023    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:06.063780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.064562   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.067564   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.069726   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.070666   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:06.063780   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.064562   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.067564   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.069726   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:06.070666   10559 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:06.076023    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:06.076023    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:08.608606    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:08.632215    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:08.665017    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.665017    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:08.669299    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:08.695355    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.695355    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:08.699306    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:08.729054    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.729054    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:08.732454    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:08.759881    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.759881    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:08.764328    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:08.793695    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.793777    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:08.797908    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:08.826225    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.826225    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:08.829679    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:08.859645    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.859645    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:08.863083    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:08.893657    6296 logs.go:282] 0 containers: []
	W1217 02:11:08.893657    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:08.893657    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:08.893657    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:08.958163    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:08.958163    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:08.997418    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:08.997418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:09.087973    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:09.074815   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.076834   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.078823   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.080747   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.081590   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:09.074815   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.076834   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.078823   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.080747   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:09.081590   10705 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:09.087973    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:09.087973    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:09.115687    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:09.115687    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:11.697770    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:11.725676    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:11.758809    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.758809    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:11.762929    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:11.794198    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.794198    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:11.798023    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:11.828890    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.828890    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:11.833358    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:11.865217    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.865217    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:11.868915    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:11.897672    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.897672    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:11.901235    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:11.931725    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.931808    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:11.935264    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:11.966263    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.966263    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:11.970422    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:11.999856    6296 logs.go:282] 0 containers: []
	W1217 02:11:11.999856    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:11.999856    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:11.999856    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:12.064137    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:12.064137    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:12.102491    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:12.102491    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:12.183568    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:12.174095   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.175081   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.176122   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.177427   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.178548   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:12.174095   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.175081   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.176122   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.177427   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:12.178548   10862 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:12.183568    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:12.183568    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:12.212178    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:12.212178    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
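
Each gathering cycle first scans one Docker name filter per control-plane component (the k8s_ prefix is the cri-dockerd container naming convention), then falls back through `which crictl || echo crictl` to plain `docker ps -a` for overall container status. A hedged Go sketch of that scan — run locally rather than over SSH as minikube's ssh_runner does — assuming only that the docker CLI is on PATH:

    // gather_containers.go — sketch of the per-component scan seen above.
    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Name filters taken from the log lines above.
    	for _, name := range []string{
    		"k8s_kube-apiserver", "k8s_etcd", "k8s_coredns",
    		"k8s_kube-scheduler", "k8s_kube-proxy",
    		"k8s_kube-controller-manager", "k8s_kindnet", "k8s_kubernetes-dashboard",
    	} {
    		out, err := exec.Command("docker", "ps", "-a",
    			"--filter", "name="+name, "--format", "{{.ID}}").Output()
    		if err != nil {
    			fmt.Println(name, "error:", err)
    			continue
    		}
    		ids := strings.Fields(string(out))
    		// The failing run reports "0 containers: []" for every filter.
    		fmt.Printf("%s: %d containers %v\n", name, len(ids), ids)
    	}
    }
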
	W1217 02:11:08.241744    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): Get "https://127.0.0.1:63565/api/v1/nodes/no-preload-184000": EOF
	W1217 02:11:16.871278    6768 node_ready.go:55] error getting node "no-preload-184000" condition "Ready" status (will retry): client rate limiter Wait returned an error: context deadline exceeded - error from a previous attempt: EOF
	I1217 02:11:16.871278    6768 node_ready.go:38] duration metric: took 6m0.0008728s for node "no-preload-184000" to be "Ready" ...
	I1217 02:11:16.874572    6768 out.go:203] 
	W1217 02:11:16.876457    6768 out.go:285] X Exiting due to GUEST_START: failed to start node: wait 6m0s for node: waiting for node to be ready: WaitNodeCondition: context deadline exceeded
	W1217 02:11:16.876457    6768 out.go:285] * 
	W1217 02:11:16.879042    6768 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1217 02:11:16.881673    6768 out.go:203] 
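
The parallel no-preload run (pid 6768) exits here: node_ready.go retried GET /api/v1/nodes/no-preload-184000 until its 6m0s budget expired, then surfaced GUEST_START with "context deadline exceeded". A hedged sketch of that wait shape — the 10s retry interval and the checkReady stand-in are assumptions, not minikube's actual code:

    // wait_ready.go — sketch of a deadline-bounded readiness poll.
    package main

    import (
    	"context"
    	"errors"
    	"fmt"
    	"time"
    )

    // checkReady stands in for the real GET against /api/v1/nodes/<name>;
    // in the failing run every attempt returned EOF.
    func checkReady(ctx context.Context) (bool, error) {
    	return false, errors.New("EOF")
    }

    func main() {
    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()
    	for {
    		if ready, err := checkReady(ctx); err == nil && ready {
    			fmt.Println("node is Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			// Mirrors the exit above: "wait 6m0s for node: waiting for
    			// node to be ready: ... context deadline exceeded"
    			fmt.Println("GUEST_START:", ctx.Err())
    			return
    		case <-time.After(10 * time.Second): // retry interval: assumption
    		}
    	}
    }
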
	I1217 02:11:14.772821    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:14.797656    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:14.826900    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.826900    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:14.829894    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:14.859202    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.859202    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:14.862783    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:14.891414    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.891414    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:14.895052    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:14.925404    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.925404    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:14.928966    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:14.959295    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.959330    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:14.962893    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:14.991696    6296 logs.go:282] 0 containers: []
	W1217 02:11:14.991730    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:14.994776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:15.025468    6296 logs.go:282] 0 containers: []
	W1217 02:11:15.025468    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:15.031674    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:15.060661    6296 logs.go:282] 0 containers: []
	W1217 02:11:15.060661    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:15.060733    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:15.060733    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:15.120513    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:15.120513    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:15.159608    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:15.159608    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:15.244418    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:15.235611   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.236439   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.238662   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.239643   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.240776   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:15.235611   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.236439   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.238662   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.239643   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:15.240776   11025 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:15.244418    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:15.244418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:15.271288    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:15.271288    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:17.830556    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:17.850600    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:17.886696    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.886696    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:17.890674    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:17.921702    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.921702    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:17.924697    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:17.952692    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.952692    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:17.956701    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:17.984691    6296 logs.go:282] 0 containers: []
	W1217 02:11:17.984691    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:17.988655    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:18.024626    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.024663    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:18.028558    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:18.060310    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.060310    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:18.064024    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:18.100124    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.100124    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:18.104105    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:18.141223    6296 logs.go:282] 0 containers: []
	W1217 02:11:18.141223    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:18.141223    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:18.141223    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:18.179686    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:18.179686    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:18.311240    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:18.298507   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.299764   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.301130   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.305360   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.306018   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:18.298507   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.299764   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.301130   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.305360   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:18.306018   11185 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:18.311240    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:18.311240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:18.342566    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:18.342615    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:18.393872    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:18.393872    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:20.977693    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:21.006733    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:21.035136    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.035201    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:21.039202    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:21.069636    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.069636    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:21.075448    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:21.105437    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.105437    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:21.108735    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:21.136602    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.136602    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:21.140124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:21.168674    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.168674    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:21.172368    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:21.204723    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.204723    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:21.208123    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:21.237130    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.237130    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:21.240654    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:21.268170    6296 logs.go:282] 0 containers: []
	W1217 02:11:21.268170    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:21.268170    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:21.268170    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:21.333642    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:21.333642    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:21.372230    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:21.372230    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:21.467012    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:21.456191   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457465   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457898   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.460543   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.461536   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:21.456191   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457465   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.457898   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.460543   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:21.461536   11355 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:21.467012    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:21.467012    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:21.495867    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:21.495867    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:24.053568    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:24.079587    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:24.110362    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.110399    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:24.113326    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:24.141818    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.141818    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:24.145313    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:24.172031    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.172031    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:24.176197    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:24.205114    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.205133    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:24.208437    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:24.238244    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.238244    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:24.242692    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:24.271687    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.271687    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:24.276384    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:24.307922    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.307922    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:24.311538    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:24.350108    6296 logs.go:282] 0 containers: []
	W1217 02:11:24.350108    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:24.350108    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:24.350108    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:24.402159    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:24.402224    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:24.463824    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:24.463824    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:24.503645    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:24.503645    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:24.591969    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:24.584283   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.585294   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.586182   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.588436   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.589378   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:24.584283   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.585294   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.586182   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.588436   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:24.589378   11542 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:24.591969    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:24.591969    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:27.123965    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:27.157839    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:27.199991    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.199991    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:27.204206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:27.231981    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.231981    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:27.235568    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:27.265668    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.265668    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:27.269162    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:27.299488    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.299488    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:27.303277    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:27.335769    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.335769    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:27.339516    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:27.369112    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.369112    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:27.372881    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:27.402031    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.402031    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:27.405780    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:27.436610    6296 logs.go:282] 0 containers: []
	W1217 02:11:27.436610    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:27.436610    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:27.436610    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:27.523394    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:27.514396   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.515456   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.516979   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.518950   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.519928   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:27.514396   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.515456   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.516979   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.518950   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:27.519928   11673 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:11:27.523917    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:27.523957    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:27.552476    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:27.552476    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:27.607026    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:27.607078    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:27.670834    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:27.670834    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
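Each iteration of this loop does the same thing: probe for a running kube-apiserver process, scan for the standard control-plane containers by their kubelet-assigned k8s_<name> prefixes, then gather kubelet, dmesg, describe-nodes, Docker, and container-status logs. A minimal sketch of the container scan, runnable by hand inside the node (for example via minikube ssh); the container list mirrors the names probed above:

    # Check each expected control-plane container by its k8s_ name prefix.
    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      ids=$(docker ps -a --filter=name=k8s_$c --format '{{.ID}}')
      if [ -n "$ids" ]; then echo "$c: $ids"; else echo "no container matching $c"; fi
    done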
	I1217 02:11:30.216027    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:30.241711    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:30.272275    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.272275    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:30.276071    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:30.304635    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.304635    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:30.307639    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:30.340374    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.340374    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:30.343758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:30.374162    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.374162    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:30.378010    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:30.407836    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.407836    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:30.411411    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:30.440002    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.440002    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:30.443429    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:30.472647    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.472647    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:30.476538    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:30.510744    6296 logs.go:282] 0 containers: []
	W1217 02:11:30.510744    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:30.510744    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:30.510744    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:30.575069    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:30.575156    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:30.639732    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:30.640731    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:30.685195    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:30.685195    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:30.775246    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:30.762447   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.763441   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.764998   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.765913   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:30.768466   11864 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:30.775295    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:30.775295    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
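Every describe-nodes attempt fails identically: the staged kubectl runs on the node and dials the apiserver at localhost:8443, where nothing is listening, so each API group discovery request is refused. A quick hedged probe of that endpoint (assumes the default profile; add -p <profile> for a named one, and assumes curl is present in the node image):

    # Probe the endpoint the staged kubectl is dialing; "connection refused"
    # here matches the memcache.go errors above.
    minikube ssh -- "sudo curl -sk https://localhost:8443/healthz || echo 'nothing listening on 8443'"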
	I1217 02:11:33.308109    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:33.334329    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:33.365061    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.365061    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:33.370854    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:33.399488    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.399488    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:33.406335    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:33.436434    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.436434    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:33.439783    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:33.468947    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.468947    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:33.474014    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:33.502568    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.502568    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:33.506146    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:33.535706    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.535706    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:33.540016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:33.573811    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.573811    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:33.577712    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:33.606321    6296 logs.go:282] 0 containers: []
	W1217 02:11:33.606321    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:33.606321    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:33.606321    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:33.671884    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:33.671884    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:33.712095    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:33.712095    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:33.800767    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:33.788569   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.789526   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.793280   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.794779   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:33.795796   12010 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:33.800848    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:33.800884    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:33.829402    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:33.829474    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:36.410236    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:36.438912    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:36.468229    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.468229    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:36.472231    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:36.501220    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.501220    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:36.506462    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:36.539556    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.539556    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:36.543603    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:36.584367    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.584367    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:36.588513    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:36.620670    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.620670    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:36.626030    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:36.654239    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.654239    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:36.658962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:36.689023    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.689023    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:36.693754    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:36.721351    6296 logs.go:282] 0 containers: []
	W1217 02:11:36.721351    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:36.721351    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:36.721351    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:36.787832    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:36.787832    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:36.828019    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:36.828019    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:36.916923    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:36.906317   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.907259   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.909560   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.910589   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:36.911494   12168 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:36.916923    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:36.916923    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:36.946231    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:36.946265    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
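The kubelet and Docker logs above come straight from journalctl, and the kernel warnings from dmesg; all three are ordinary systemd/util-linux invocations. The same commands, run by hand (add -p <profile> if the cluster is not the default profile):

    minikube ssh -- "sudo journalctl -u kubelet -n 400"
    minikube ssh -- "sudo journalctl -u docker -u cri-docker -n 400"
    minikube ssh -- "sudo dmesg --level warn,err,crit,alert,emerg | tail -n 400"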
	I1217 02:11:39.498459    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:39.522909    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:39.553462    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.553462    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:39.557524    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:39.585462    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.585462    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:39.591342    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:39.619332    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.619399    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:39.623096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:39.651071    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.651071    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:39.654766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:39.683502    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.683502    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:39.687390    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:39.715332    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.715332    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:39.718932    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:39.749019    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.749019    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:39.752739    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:39.783378    6296 logs.go:282] 0 containers: []
	W1217 02:11:39.783378    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:39.783378    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:39.783378    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:39.835019    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:39.835019    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:39.899542    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:39.899542    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:39.938717    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:39.938717    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:40.026359    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:40.016461   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.017619   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.018723   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.019917   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:40.021008   12341 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:40.026403    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:40.026446    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:42.561805    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:42.585507    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:42.613091    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.613091    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:42.616991    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:42.647608    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.647608    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:42.651380    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:42.680540    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.680540    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:42.683625    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:42.717014    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.717014    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:42.721369    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:42.750017    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.750017    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:42.753961    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:42.785164    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.785164    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:42.788883    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:42.817424    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.817424    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:42.821266    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:42.853247    6296 logs.go:282] 0 containers: []
	W1217 02:11:42.853247    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:42.853247    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:42.853247    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:42.910034    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:42.910052    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:42.970436    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:42.970436    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:43.009833    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:43.010830    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:43.102803    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:43.091179   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.092013   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.095588   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.097098   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:43.098447   12505 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:43.102803    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:43.102803    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
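The container-status command is a fallback chain: use crictl if it resolves on PATH, otherwise fall back to docker ps. Expanded into plainer shell (same behavior as the backtick one-liner in the log):

    # Prefer crictl when installed; otherwise list containers with docker.
    CRICTL=$(which crictl || echo crictl)
    sudo "$CRICTL" ps -a || sudo docker ps -a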
	I1217 02:11:45.636418    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:45.661677    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:45.695141    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.695141    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:45.699189    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:45.729376    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.729376    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:45.733753    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:45.764365    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.764365    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:45.767917    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:45.799287    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.799287    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:45.802968    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:45.835270    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.835270    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:45.838766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:45.868660    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.868660    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:45.875727    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:45.903566    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.903566    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:45.907562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:45.937452    6296 logs.go:282] 0 containers: []
	W1217 02:11:45.937452    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:45.937452    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:45.937452    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:45.965091    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:45.965091    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:46.013173    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:46.013173    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:46.077113    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:46.077113    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:46.118527    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:46.118527    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:46.207662    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:46.198319   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.199665   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.200697   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.201868   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:46.202946   12666 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
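The pgrep probe that opens each iteration is the apiserver liveness check: -f matches against the full command line, -x requires the pattern to match that line exactly, and -n returns only the newest matching process. Run by hand it looks like this (pattern quoted so the shell does not expand it):

    minikube ssh -- "sudo pgrep -xnf 'kube-apiserver.*minikube.*'"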
	I1217 02:11:48.714055    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:48.741412    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:48.772767    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.772767    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:48.776092    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:48.804946    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.805020    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:48.808538    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:48.837488    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.837488    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:48.840453    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:48.871139    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.871139    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:48.875518    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:48.904264    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.904264    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:48.911351    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:48.939118    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.939118    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:48.943340    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:48.970934    6296 logs.go:282] 0 containers: []
	W1217 02:11:48.970934    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:48.974990    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:49.005140    6296 logs.go:282] 0 containers: []
	W1217 02:11:49.005174    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:49.005205    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:49.005234    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:49.075925    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:49.075925    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:49.116144    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:49.116144    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:49.196968    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:49.188036   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.189151   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.190274   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.191246   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:49.192420   12807 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:49.197074    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:49.197074    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:49.222883    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:49.223404    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
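The describe-nodes check uses the version-pinned kubectl that minikube stages on the node, pointed at the node-local kubeconfig, so it can be replayed verbatim from a shell:

    minikube ssh -- "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"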
	I1217 02:11:51.783312    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:51.809151    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:51.839751    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.839751    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:51.844016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:51.895178    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.895178    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:51.899341    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:51.930311    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.930311    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:51.933797    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:51.961857    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.961857    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:51.966036    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:51.993647    6296 logs.go:282] 0 containers: []
	W1217 02:11:51.993647    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:51.997672    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:52.026485    6296 logs.go:282] 0 containers: []
	W1217 02:11:52.026485    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:52.032726    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:52.062039    6296 logs.go:282] 0 containers: []
	W1217 02:11:52.062039    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:52.066379    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:52.096772    6296 logs.go:282] 0 containers: []
	W1217 02:11:52.096772    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:52.096772    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:52.096772    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:52.163369    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:52.163369    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:52.203719    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:52.203719    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:52.295324    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:52.285688   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.286944   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.288407   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.289493   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:52.290536   12965 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:52.295324    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:52.295324    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:52.323234    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:52.323234    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:54.878824    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:54.907441    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:54.944864    6296 logs.go:282] 0 containers: []
	W1217 02:11:54.944864    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:54.948030    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:54.980769    6296 logs.go:282] 0 containers: []
	W1217 02:11:54.980769    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:54.987506    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:55.019726    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.019726    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:55.024226    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:55.052618    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.052618    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:55.056658    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:55.085528    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.085607    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:55.089212    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:55.120453    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.120525    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:55.124591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:55.154725    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.154749    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:55.157707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:55.187692    6296 logs.go:282] 0 containers: []
	W1217 02:11:55.187692    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:11:55.187692    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:55.187692    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:11:55.252848    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:55.252848    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:55.318197    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:55.318197    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:11:55.358145    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:55.358145    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:55.439213    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:55.430988   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.431927   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.433074   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.434586   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:55.435691   13158 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:11:55.439213    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:55.439744    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:11:57.972346    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:11:57.997412    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:11:58.029794    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.029794    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:11:58.033582    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:11:58.064729    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.064729    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:11:58.068722    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:11:58.103854    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.103854    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:11:58.107069    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:11:58.140767    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.140767    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:11:58.145080    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:11:58.172792    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.172792    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:11:58.177038    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:11:58.205809    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.205809    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:11:58.209371    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:11:58.236353    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.236353    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:11:58.240621    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:11:58.269469    6296 logs.go:282] 0 containers: []
	W1217 02:11:58.269469    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
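The scan above is one `docker ps -a` per expected control-plane component, keyed on the `k8s_` name prefix that cri-dockerd gives Kubernetes-managed containers; every filter coming back empty is what confirms the control plane never came up. A compact sketch of the same loop, to run inside the guest:

    for c in kube-apiserver etcd coredns kube-scheduler kube-proxy \
             kube-controller-manager kindnet kubernetes-dashboard; do
      docker ps -a --filter=name=k8s_$c --format='{{.ID}}'
    done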
	I1217 02:11:58.269469    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:11:58.269469    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
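The container-status command relies on a shell fallback: `which crictl || echo crictl` expands to the crictl path when it is installed and to the bare word otherwise, so when crictl is missing the `sudo crictl ps -a` attempt exits non-zero and control falls through to `sudo docker ps -a`. Annotated form of the same one-liner:

    # try crictl if installed; otherwise the sudo call fails and docker takes over
    sudo `which crictl || echo crictl` ps -a || sudo docker ps -a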
	I1217 02:11:58.324960    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:11:58.324960    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:11:58.384708    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:11:58.384708    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
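The dmesg capture keeps only kernel messages at warning severity and worse (`--level warn,err,crit,alert,emerg`) and trims to the last 400 lines; the remaining switches, as I read util-linux dmesg, just make the output capture-friendly (human-readable timestamps, no pager, no color). Equivalent long-option spelling, as an illustration only:

    sudo dmesg --nopager --human --color=never --level warn,err,crit,alert,emerg | tail -n 400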
	I1217 02:11:58.423476    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:11:58.423476    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:11:58.512328    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:11:58.500192   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.501577   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.503665   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.506831   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.509044   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:11:58.500192   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.501577   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.503665   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.506831   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:11:58.509044   13320 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
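Note that "describe nodes" is executed with the guest's pinned kubectl binary against the guest's own kubeconfig, so it fails for the same reason as the probes above. A hedged manual equivalent over minikube ssh (profile name is again a placeholder):

    minikube -p <profile> ssh -- sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig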
	I1217 02:11:58.512387    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:11:58.512387    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:01.044354    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:01.073699    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:01.104765    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.104836    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:01.107915    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:01.141131    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.141131    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:01.145209    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:01.174536    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.174536    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:01.178187    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:01.209172    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.209172    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:01.212803    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:01.241435    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.241486    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:01.245545    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:01.277115    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.277115    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:01.281366    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:01.312158    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.312158    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:01.316725    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:01.343220    6296 logs.go:282] 0 containers: []
	W1217 02:12:01.343220    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:01.343220    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:01.343220    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:01.382233    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:01.382233    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:01.487570    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:01.476084   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.477142   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.479990   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.481020   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.482426   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:01.476084   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.477142   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.479990   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.481020   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:01.482426   13465 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:01.488578    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:01.488578    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:01.514572    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:01.514572    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:01.567754    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:01.567754    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:04.140604    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:04.165376    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:04.197379    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.197379    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:04.202896    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:04.231436    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.231506    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:04.235354    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:04.267960    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.267960    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:04.271789    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:04.301108    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.301108    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:04.305219    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:04.334515    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.334515    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:04.338693    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:04.366071    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.366071    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:04.369958    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:04.398457    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.398457    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:04.405087    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:04.432495    6296 logs.go:282] 0 containers: []
	W1217 02:12:04.432495    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:04.432495    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:04.432495    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:04.492454    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:04.492454    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:04.530878    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:04.530878    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:04.615739    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:04.603893   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.604965   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.606519   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.608498   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.609457   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:04.603893   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.604965   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.606519   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.608498   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:04.609457   13631 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:04.615739    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:04.615739    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:04.643270    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:04.643304    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:07.195429    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:07.221998    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:07.254842    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.254842    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:07.258578    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:07.291820    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.291820    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:07.297979    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:07.329603    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.329603    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:07.334181    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:07.363276    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.363324    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:07.367248    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:07.394630    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.394695    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:07.398679    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:07.425998    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.425998    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:07.429814    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:07.458824    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.458878    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:07.462682    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:07.490543    6296 logs.go:282] 0 containers: []
	W1217 02:12:07.490614    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:07.490614    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:07.490614    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:07.575806    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:07.562525   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.563684   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.568204   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.569084   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.572372   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:07.562525   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.563684   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.568204   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.569084   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:07.572372   13789 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:07.575806    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:07.576816    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:07.607910    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:07.607910    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:07.659155    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:07.659155    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:07.722240    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:07.722240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:10.270711    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:10.295753    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:10.324920    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.324920    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:10.328903    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:10.358180    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.358218    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:10.362249    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:10.390135    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.390135    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:10.393738    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:10.423058    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.423090    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:10.426534    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:10.456745    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.456745    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:10.463439    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:10.493765    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.493765    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:10.497858    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:10.526425    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.526425    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:10.532217    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:10.563338    6296 logs.go:282] 0 containers: []
	W1217 02:12:10.563338    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:10.563338    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:10.563338    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:10.627669    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:10.627669    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:10.666455    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:10.666455    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:10.755613    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:10.742575   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.744309   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.748746   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.750149   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.751294   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:10.742575   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.744309   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.748746   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.750149   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:10.751294   13955 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:10.755613    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:10.755613    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:10.786516    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:10.787045    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:13.342631    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:13.368870    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:13.402304    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.402347    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:13.408012    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:13.436633    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.436710    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:13.439877    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:13.468754    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.469007    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:13.473752    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:13.505247    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.505324    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:13.509766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:13.538745    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.538745    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:13.542743    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:13.571986    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.571986    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:13.575522    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:13.604002    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.604002    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:13.608063    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:13.636028    6296 logs.go:282] 0 containers: []
	W1217 02:12:13.636028    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:13.636028    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:13.636028    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:13.701418    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:13.701418    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:13.740729    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:13.740729    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:13.830687    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:13.819650   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.820972   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.822197   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.823236   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.826085   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:13.819650   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.820972   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.822197   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.823236   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:13.826085   14114 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:13.830746    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:13.830768    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:13.856732    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:13.856732    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:16.415071    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:16.441827    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:16.474920    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.474920    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:16.478560    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:16.509149    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.509149    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:16.512927    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:16.544114    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.544114    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:16.547867    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:16.578111    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.578111    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:16.581776    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:16.610586    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.610586    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:16.614807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:16.644103    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.644103    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:16.647954    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:16.692289    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.692289    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:16.696153    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:16.727229    6296 logs.go:282] 0 containers: []
	W1217 02:12:16.727229    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:16.727229    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:16.727229    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:16.823236    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:16.813914   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.815339   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.816582   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.817632   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.818568   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:16.813914   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.815339   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.816582   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.817632   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:16.818568   14273 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:16.823236    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:16.823236    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:16.849827    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:16.849827    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:16.905388    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:16.905414    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:16.965153    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:16.965153    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:19.511192    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:19.537347    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:19.568920    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.568920    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:19.573318    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:19.604587    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.604587    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:19.608244    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:19.637707    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.637732    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:19.641314    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:19.669047    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.669047    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:19.672932    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:19.703243    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.703243    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:19.706862    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:19.738948    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.738948    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:19.742483    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:19.773620    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.773620    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:19.777766    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:19.807218    6296 logs.go:282] 0 containers: []
	W1217 02:12:19.807218    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:19.807218    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:19.807218    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:19.872750    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:19.872750    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:19.912835    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:19.912835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:19.997398    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:19.986540   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.987576   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.989197   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.992124   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.993453   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:19.986540   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.987576   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.989197   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.992124   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:19.993453   14438 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:19.997398    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:19.997398    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:20.025629    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:20.025629    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:22.593289    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:22.619754    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:22.652929    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.652929    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:22.657635    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:22.689768    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.689846    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:22.693504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:22.720087    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.720087    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:22.723840    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:22.752902    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.752959    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:22.757109    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:22.787369    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.787369    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:22.791584    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:22.822117    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.822117    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:22.825675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:22.856022    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.856022    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:22.859609    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:22.886982    6296 logs.go:282] 0 containers: []
	W1217 02:12:22.886982    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:22.886982    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:22.886982    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:22.972988    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:22.964488   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.965494   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.966951   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.967984   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.968891   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:22.964488   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.965494   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.966951   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.967984   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:22.968891   14590 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:22.972988    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:22.972988    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:23.002037    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:23.002037    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:23.061548    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:23.061548    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:23.124352    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:23.124352    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:25.670974    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:25.706279    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:25.741150    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.741150    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:25.745079    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:25.773721    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.773782    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:25.779777    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:25.808516    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.808516    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:25.813011    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:25.844755    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.844755    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:25.848591    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:25.877332    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.877332    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:25.881053    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:25.907973    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.907973    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:25.914424    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:25.941138    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.941138    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:25.945025    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:25.974760    6296 logs.go:282] 0 containers: []
	W1217 02:12:25.974760    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:25.974760    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:25.974760    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:26.012354    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:26.012354    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:26.113177    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:26.103007   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.104679   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.105508   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.108836   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.110003   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:26.103007   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.104679   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.105508   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.108836   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:26.110003   14762 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:26.113177    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:26.113177    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:26.144162    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:26.144245    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:26.194605    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:26.195138    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:28.763811    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:28.789762    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:28.820544    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.820544    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:28.824807    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:28.855728    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.855728    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:28.860354    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:28.894655    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.894655    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:28.898069    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:28.928310    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.928394    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:28.932124    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:28.967209    6296 logs.go:282] 0 containers: []
	W1217 02:12:28.967209    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:28.973126    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:29.002975    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.003024    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:29.006839    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:29.044805    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.044881    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:29.049158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:29.078108    6296 logs.go:282] 0 containers: []
	W1217 02:12:29.078142    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:29.078174    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:29.078202    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:29.142751    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:29.142751    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:29.182082    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:29.182082    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:29.271566    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:29.260263   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.261578   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.262370   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.263821   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.265155   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:29.260263   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.261578   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.262370   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.263821   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:29.265155   14926 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:29.271596    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:29.271643    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:29.299332    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:29.299332    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:31.856743    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:31.882741    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:31.912323    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.912372    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:31.917046    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:31.948587    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.948631    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:31.952095    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:31.981682    6296 logs.go:282] 0 containers: []
	W1217 02:12:31.981682    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:31.985888    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:32.022173    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.022173    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:32.026061    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:32.070026    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.070026    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:32.074016    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:32.105255    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.105255    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:32.109062    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:32.140873    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.140947    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:32.143941    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:32.172848    6296 logs.go:282] 0 containers: []
	W1217 02:12:32.172876    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:32.172876    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:32.172876    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:32.237207    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:32.237207    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:32.275838    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:32.275838    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:32.360656    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:32.349190   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.350542   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.352960   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.354559   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.355745   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:32.349190   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.350542   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.352960   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.354559   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:32.355745   15084 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:32.360656    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:32.360656    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:32.391099    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:32.391099    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:34.970955    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:35.002200    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:35.036658    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.036658    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:35.041208    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:35.068998    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.068998    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:35.075758    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:35.105253    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.105253    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:35.109356    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:35.137411    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.137411    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:35.141289    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:35.168542    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.168542    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:35.174717    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:35.204677    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.204677    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:35.209675    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:35.240901    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.240901    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:35.244034    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:35.276453    6296 logs.go:282] 0 containers: []
	W1217 02:12:35.276453    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:35.276453    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:35.276453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:35.341158    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:35.341158    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:35.381822    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:35.381822    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:35.472890    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:35.461861   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.463097   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.464080   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.465245   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.466603   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:35.461861   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.463097   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.464080   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.465245   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:35.466603   15239 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:35.472890    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:35.472890    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:35.501374    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:35.501374    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:38.054644    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:38.080787    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:38.112397    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.112420    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:38.116070    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:38.144341    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.144396    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:38.148080    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:38.177159    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.177159    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:38.181253    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:38.210000    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.210000    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:38.215709    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:38.243526    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.243526    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:38.247620    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:38.278443    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.278443    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:38.282504    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:38.314414    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.314414    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:38.317968    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:38.345306    6296 logs.go:282] 0 containers: []
	W1217 02:12:38.345306    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:38.345306    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:38.345412    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:38.425240    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:38.414795   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.415865   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.416969   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.418280   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.420090   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:38.414795   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.415865   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.416969   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.418280   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:38.420090   15389 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:38.425240    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:38.425240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:38.455129    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:38.455129    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:38.514775    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:38.514775    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:38.574255    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:38.574255    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:41.116537    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:41.139650    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:41.169726    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.169814    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:41.173285    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:41.204812    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.204812    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:41.208892    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:41.235980    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.235980    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:41.240200    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:41.271415    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.271415    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:41.275005    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:41.303967    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.303967    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:41.309707    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:41.340401    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.340401    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:41.343688    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:41.374008    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.374008    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:41.377325    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:41.409502    6296 logs.go:282] 0 containers: []
	W1217 02:12:41.409563    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:41.409563    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:41.409610    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:41.472168    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:41.472168    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:41.513098    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:41.513098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:41.601716    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:41.590607   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.591236   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.594281   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.595448   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.596679   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:41.590607   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.591236   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.594281   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.595448   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:41.596679   15551 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:41.601716    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:41.601716    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:41.629092    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:41.629148    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:44.185012    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:44.210566    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:44.242274    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.242274    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:44.248762    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:44.280241    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.280307    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:44.283818    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:44.312929    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.312997    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:44.316643    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:44.343840    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.343840    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:44.347619    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:44.378547    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.378547    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:44.382595    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:44.410908    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.410908    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:44.414686    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:44.448329    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.448329    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:44.453888    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:44.484842    6296 logs.go:282] 0 containers: []
	W1217 02:12:44.484842    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:44.484842    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:44.484842    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:44.550740    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:44.550740    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:44.589666    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:44.589666    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:44.677625    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:44.666291   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.667584   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.668804   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.671406   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.673722   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:44.666291   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.667584   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.668804   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.671406   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:44.673722   15715 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:44.677625    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:44.677625    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:44.706051    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:44.706051    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:47.257477    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:47.286845    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:47.315563    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.315563    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:47.319220    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:47.351319    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.351319    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:47.354946    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:47.382237    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.382237    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:47.386106    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:47.415608    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.415608    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:47.419575    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:47.449212    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.449241    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:47.452978    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:47.482356    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.482356    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:47.486511    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:47.518156    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.518205    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:47.522254    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:47.550631    6296 logs.go:282] 0 containers: []
	W1217 02:12:47.550631    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:47.550631    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:47.550727    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:47.615950    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:47.615950    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:47.655928    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:47.655928    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:47.744126    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:47.732398   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.733599   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.736473   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.737237   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.739895   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:47.732398   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.733599   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.736473   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.737237   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:47.739895   15882 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:47.744164    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:47.744210    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:47.773502    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:47.773502    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:50.331328    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:50.368555    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:50.407443    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.407443    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:50.411798    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:50.440520    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.440544    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:50.444430    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:50.478050    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.478050    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:50.481848    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:50.513603    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.513658    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:50.517565    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:50.551935    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.552946    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:50.556641    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:50.591171    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.591171    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:50.594981    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:50.624821    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.624821    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:50.628756    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:50.661209    6296 logs.go:282] 0 containers: []
	W1217 02:12:50.661209    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:50.661209    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:50.661209    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:50.693141    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:50.693141    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:50.746322    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:50.746322    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:50.805974    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:50.805974    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:50.844572    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:50.844572    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:50.935133    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:50.925528   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.926281   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.929008   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.930044   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.931058   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:50.925528   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.926281   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.929008   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.930044   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:50.931058   16067 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:53.441690    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:53.466017    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:53.494846    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.494846    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:53.499634    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:53.530839    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.530839    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:53.534661    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:53.567189    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.567189    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:53.571412    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:53.598763    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.598763    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:53.602673    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:53.629791    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.629791    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:53.632953    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:53.662323    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.662323    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:53.665394    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:53.695745    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.695745    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:53.701403    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:53.735348    6296 logs.go:282] 0 containers: []
	W1217 02:12:53.735348    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:53.735348    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:53.735348    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:53.816532    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:53.807828   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.809036   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.810223   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.811373   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.812449   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:12:53.807828   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.809036   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.810223   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.811373   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:53.812449   16201 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:12:53.816532    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:53.816532    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:53.843453    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:53.843453    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:12:53.893853    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:53.893853    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:53.954759    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:53.954759    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:56.499506    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:56.525316    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:56.561689    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.561738    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:56.565616    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:56.594009    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.594009    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:56.599822    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:56.624101    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.624101    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:56.628604    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:56.657977    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.658063    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:56.663240    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:56.694316    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.694316    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:56.698763    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:56.728527    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.728527    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:56.734446    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:56.765315    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.765315    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:56.769182    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:56.796198    6296 logs.go:282] 0 containers: []
	W1217 02:12:56.796198    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
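Each three-second round above probes for every expected control-plane container by the k8s_ name prefix and finds none. The real scan lives in minikube's logs.go and runs over SSH; the following is only an illustrative Go sketch of the same docker ps loop, with the component list copied from the log:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "kindnet", "kubernetes-dashboard"}
	for _, c := range components {
		// Mirrors: docker ps -a --filter=name=k8s_<component> --format={{.ID}}
		out, err := exec.Command("docker", "ps", "-a",
			"--filter", "name=k8s_"+c, "--format", "{{.ID}}").Output()
		if err != nil {
			fmt.Println(c, "scan failed:", err)
			continue
		}
		ids := strings.Fields(string(out))
		fmt.Printf("%s: %d containers: %v\n", c, len(ids), ids)
	}
}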
	I1217 02:12:56.796198    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:56.796198    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:56.864777    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:56.864777    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:12:56.904264    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:12:56.904264    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:12:57.000434    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:12:56.990265   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.991556   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.992920   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.993844   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:12:56.996033   16371 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:12:57.000434    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:12:57.000434    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:12:57.034757    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:12:57.034842    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
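The container-status step is a shell fallback chain: use crictl if it is on PATH, otherwise fall back to the Docker CLI. A hedged Go equivalent of that `which crictl || echo crictl` / `||` logic (illustrative only; minikube actually ships this as a single bash -c string through ssh_runner):

package main

import (
	"fmt"
	"os/exec"
)

// containerStatus mirrors: sudo `which crictl || echo crictl` ps -a || sudo docker ps -a
// Hypothetical helper, not minikube's implementation.
func containerStatus() ([]byte, error) {
	if _, err := exec.LookPath("crictl"); err == nil {
		if out, err := exec.Command("sudo", "crictl", "ps", "-a").Output(); err == nil {
			return out, nil
		}
	}
	// crictl missing or failed: fall back to the Docker CLI.
	return exec.Command("sudo", "docker", "ps", "-a").Output()
}

func main() {
	out, err := containerStatus()
	if err != nil {
		fmt.Println("both runtimes unreachable:", err)
		return
	}
	fmt.Print(string(out))
}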
	I1217 02:12:59.601768    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:12:59.627731    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:12:59.657009    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.657009    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:12:59.660962    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:12:59.690428    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.690428    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:12:59.694181    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:12:59.723517    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.723592    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:12:59.727191    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:12:59.756251    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.756251    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:12:59.759627    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:12:59.791516    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.791516    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:12:59.795602    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:12:59.828192    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.828192    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:12:59.832003    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:12:59.860258    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.860258    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:12:59.863635    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:12:59.893207    6296 logs.go:282] 0 containers: []
	W1217 02:12:59.893207    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:12:59.893207    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:12:59.893207    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:12:59.958927    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:12:59.958927    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:00.004703    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:00.004703    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:00.096612    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:00.084050   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.085145   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.086221   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.088049   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:00.090502   16540 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
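The describe-nodes probe is not a separate bug: it runs the kubectl binary and kubeconfig that minikube provisioned inside the node (paths as in the log), so it fails with the same connection-refused stderr. A hypothetical wrapper showing how such a probe captures exit status and stderr separately:

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

func main() {
	// Command and paths copied verbatim from the log above.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.35.0-beta.0/kubectl",
		"describe", "nodes",
		"--kubeconfig=/var/lib/minikube/kubeconfig")
	var stdout, stderr bytes.Buffer
	cmd.Stdout = &stdout
	cmd.Stderr = &stderr
	if err := cmd.Run(); err != nil {
		// With no apiserver listening this exits with status 1,
		// and stderr carries the "connection refused" lines.
		fmt.Printf("describe nodes failed: %v\nstderr:\n%s", err, stderr.String())
		return
	}
	fmt.Print(stdout.String())
}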
	I1217 02:13:00.096612    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:00.096612    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:00.124914    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:00.124975    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:02.682962    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:02.708543    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:02.737663    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.737663    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:02.741817    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:02.772482    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.772482    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:02.778562    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:02.806978    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.806978    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:02.813021    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:02.845688    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.845688    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:02.851578    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:02.880144    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.880200    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:02.883811    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:02.918466    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.918544    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:02.922186    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:02.951702    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.951702    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:02.955491    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:02.984638    6296 logs.go:282] 0 containers: []
	W1217 02:13:02.984638    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:02.984638    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:02.984638    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:03.047941    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:03.047941    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:03.086964    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:03.086964    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:03.173007    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:03.161327   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.162497   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.163381   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.165030   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:03.166441   16700 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:03.173086    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:03.173086    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:03.202017    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:03.202544    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:05.761010    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:05.786319    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:05.819785    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.819785    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:05.825532    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:05.853318    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.853318    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:05.858274    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:05.887613    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.887613    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:05.891162    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:05.919471    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.919471    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:05.922933    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:05.955441    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.955441    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:05.959241    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:05.984925    6296 logs.go:282] 0 containers: []
	W1217 02:13:05.984925    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:05.989009    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:06.021101    6296 logs.go:282] 0 containers: []
	W1217 02:13:06.021101    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:06.024383    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:06.055098    6296 logs.go:282] 0 containers: []
	W1217 02:13:06.055098    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:06.055098    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:06.055098    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:06.107743    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:06.107743    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:06.170319    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:06.170319    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:06.210360    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:06.210360    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:06.299194    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:06.288404   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.289415   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.292346   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.293307   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:06.294574   16875 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:06.299194    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:06.299194    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:08.832901    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:08.860263    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:08.890111    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.890111    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:08.893617    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:08.921989    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.921989    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:08.925561    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:08.952883    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.952883    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:08.959516    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:08.991347    6296 logs.go:282] 0 containers: []
	W1217 02:13:08.991347    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:08.995066    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:09.028011    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.028011    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:09.032096    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:09.060803    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.060803    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:09.064596    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:09.093542    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.093572    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:09.096987    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:09.123594    6296 logs.go:282] 0 containers: []
	W1217 02:13:09.123615    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:09.123615    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:09.123615    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:09.176222    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:09.176222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:09.238935    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:09.238935    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:09.278804    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:09.278804    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:09.367283    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:09.355984   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.356989   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.358233   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.359697   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:09.360921   17033 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:09.367283    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:09.367283    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:11.901781    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:11.930493    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:11.963534    6296 logs.go:282] 0 containers: []
	W1217 02:13:11.963534    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:11.967747    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:11.997700    6296 logs.go:282] 0 containers: []
	W1217 02:13:11.997700    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:12.001601    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:12.031862    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.031862    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:12.035544    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:12.066506    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.066506    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:12.071472    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:12.103184    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.103184    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:12.107033    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:12.135713    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.135713    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:12.139268    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:12.170350    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.170350    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:12.174053    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:12.202964    6296 logs.go:282] 0 containers: []
	W1217 02:13:12.202964    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:12.202964    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:12.202964    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:12.252669    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:12.253197    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:12.318088    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:12.318088    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:12.356768    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:12.356768    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:12.443857    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:12.431867   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.432694   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.435515   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.436810   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:12.439065   17191 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:12.443857    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:12.443857    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:14.980350    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:15.007303    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:15.040020    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.040100    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:15.043303    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:15.073502    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.073502    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:15.077944    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:15.106871    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.106871    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:15.110453    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:15.138071    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.138095    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:15.141547    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:15.171602    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.171659    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:15.175341    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:15.207140    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.207181    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:15.210547    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:15.243222    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.243222    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:15.247103    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:15.280156    6296 logs.go:282] 0 containers: []
	W1217 02:13:15.280232    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:15.280232    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:15.280232    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:15.342862    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:15.342862    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:15.384022    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:15.384022    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:15.469724    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:15.457538   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.458755   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.461376   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.463262   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:15.464126   17337 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:15.469766    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:15.469807    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:15.497606    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:15.497667    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:18.064895    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:18.090410    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:18.123378    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.123429    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:18.127331    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:18.157210    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.157210    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:18.160924    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:18.191242    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.191242    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:18.195064    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:18.222561    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.222561    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:18.226125    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:18.255891    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.255891    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:18.259860    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:18.288868    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.288868    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:18.292834    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:18.322668    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.322668    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:18.325666    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:18.353052    6296 logs.go:282] 0 containers: []
	W1217 02:13:18.353052    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:18.353052    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:18.353052    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:18.418504    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:18.418504    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:18.457348    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:18.457348    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:18.568946    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:18.539845   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.540709   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.559501   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.563750   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:18.565031   17499 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:18.569003    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:18.569003    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:18.602236    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:18.602236    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:21.158752    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:21.184475    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:21.214582    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.214582    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:21.218375    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:21.245604    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.245604    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:21.249850    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:21.281360    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.281360    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:21.286501    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:21.318549    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.318601    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:21.322609    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:21.353429    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.353460    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:21.357031    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:21.391028    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.391028    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:21.394206    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:21.423584    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.423584    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:21.427599    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:21.458683    6296 logs.go:282] 0 containers: []
	W1217 02:13:21.458683    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:21.458683    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:21.458683    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:21.526430    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:21.526430    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:21.565490    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:21.565490    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:21.656323    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:21.643307   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.644610   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.648760   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.649980   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:21.650911   17670 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	I1217 02:13:21.656323    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:21.656323    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:21.689700    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:21.689700    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:24.246630    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:24.280925    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:24.322972    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.322972    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:24.326768    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:24.355732    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.355732    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:24.359957    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:24.391937    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.392009    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:24.395559    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:24.427388    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.427388    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:24.431126    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:24.459891    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.459966    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:24.463468    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:24.491009    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.491009    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:24.494465    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:24.524468    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.524468    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:24.528017    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:24.568815    6296 logs.go:282] 0 containers: []
	W1217 02:13:24.568815    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:24.568815    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:24.568815    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:24.632772    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:24.632772    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:24.671731    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:24.671731    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:24.755604    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:24.747209   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.748169   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.750016   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.751205   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.752643   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:24.747209   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.748169   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.750016   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.751205   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:24.752643   17825 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:24.755604    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:24.755604    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:24.784599    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:24.784660    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:27.338272    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:27.366367    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:27.395715    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.395715    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:27.399158    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:27.427362    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.427362    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:27.430752    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:27.461990    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.461990    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:27.465748    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:27.492985    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.492985    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:27.497176    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:27.528724    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.528724    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:27.532970    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:27.571655    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.571655    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:27.575466    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:27.604007    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.604068    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:27.608062    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:27.639624    6296 logs.go:282] 0 containers: []
	W1217 02:13:27.639689    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:27.639735    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:27.639735    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:27.705896    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:27.705896    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:27.745294    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:27.745294    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:27.827462    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:27.817987   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.819077   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.820142   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.821119   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.823572   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:27.817987   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.819077   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.820142   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.821119   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:27.823572   17984 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:27.827462    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:27.827462    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:27.854463    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:27.854559    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:30.412283    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:30.438474    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:30.469848    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.469848    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:30.473330    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:30.501713    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.501713    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:30.505748    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:30.535870    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.535870    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:30.540177    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:30.572310    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.572310    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:30.576644    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:30.607087    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.607087    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:30.610334    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:30.640168    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.640168    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:30.643628    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:30.671132    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.671132    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:30.677927    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:30.708536    6296 logs.go:282] 0 containers: []
	W1217 02:13:30.708536    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:30.708536    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:30.708536    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:30.773222    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:30.773222    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:30.812763    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:30.812763    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:30.932347    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:30.917907   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.918960   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.921632   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.923322   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.925337   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:30.917907   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.918960   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.921632   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.923322   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:30.925337   18144 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:30.932397    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:30.932444    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:30.961663    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:30.961663    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:33.524404    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:33.548624    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:33.580753    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.580845    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:33.583912    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:33.613001    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.613048    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:33.616808    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:33.645262    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.645262    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:33.649044    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:33.677477    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.677562    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:33.681205    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:33.710607    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.710669    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:33.714600    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:33.742889    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.742889    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:33.746623    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:33.777022    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.777022    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:33.780455    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:33.809525    6296 logs.go:282] 0 containers: []
	W1217 02:13:33.809525    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:33.809525    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:33.809525    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:33.860852    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:33.860936    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:33.924768    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:33.924768    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:33.962632    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:33.962632    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:34.054124    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:34.042221   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.043292   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.044548   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.046184   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.047237   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:34.042221   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.043292   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.044548   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.046184   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:34.047237   18316 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:34.054124    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:34.054124    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:36.589465    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:36.617658    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:36.652432    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.652432    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:36.656189    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:36.694709    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.694709    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:36.700040    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:36.729913    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.729913    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:36.733870    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:36.762591    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.762591    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:36.766493    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:36.796414    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.796414    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:36.800540    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:36.828148    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.828148    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:36.833323    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:36.862390    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.862390    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:36.866173    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:36.895727    6296 logs.go:282] 0 containers: []
	W1217 02:13:36.895814    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:36.895814    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:36.895814    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:36.926240    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:36.926240    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:36.975760    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:36.975760    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:37.036350    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:37.036350    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:37.072745    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:37.072745    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:37.161612    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:37.149826   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.150994   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.152971   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.154071   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.155248   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:37.149826   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.150994   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.152971   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.154071   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:37.155248   18476 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:39.667288    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:39.691212    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:39.724148    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.724148    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:39.727935    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:39.761821    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.761821    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:39.765852    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:39.793659    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.793696    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:39.797422    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:39.825439    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.825473    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:39.828751    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:39.859011    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.859011    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:39.862518    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:39.891552    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.891613    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:39.894978    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:39.926857    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.926857    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:39.930363    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:39.975835    6296 logs.go:282] 0 containers: []
	W1217 02:13:39.975835    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:39.975835    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:39.975835    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:40.070107    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:40.058472   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.059584   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.060546   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.062682   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.064347   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:40.058472   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.059584   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.060546   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.062682   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:40.064347   18613 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:40.070107    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:40.070107    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:40.098563    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:40.098605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:40.147476    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:40.147476    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:40.212702    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:40.212702    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:42.757339    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:42.786178    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:42.817429    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.817429    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:42.821164    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:42.850363    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.850415    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:42.854031    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:42.881774    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.881774    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:42.885802    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:42.915556    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.915556    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:42.919184    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:42.948329    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.948329    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:42.952430    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:42.982355    6296 logs.go:282] 0 containers: []
	W1217 02:13:42.982355    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:42.986768    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:43.017700    6296 logs.go:282] 0 containers: []
	W1217 02:13:43.017700    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:43.021284    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:43.052749    6296 logs.go:282] 0 containers: []
	W1217 02:13:43.052779    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:43.052779    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:43.052813    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:43.091605    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:43.091605    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:43.175861    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:43.162839   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.163916   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.164763   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.167177   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.170134   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:43.162839   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.163916   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.164763   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.167177   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:43.170134   18773 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:43.175861    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:43.175861    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:43.204569    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:43.204569    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:43.257132    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:43.257132    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:45.825092    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:45.853653    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:45.886780    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.886780    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:45.890416    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:45.921840    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.923184    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:45.928382    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:45.960187    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.960252    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:45.963959    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:45.993658    6296 logs.go:282] 0 containers: []
	W1217 02:13:45.993712    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:45.997113    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:46.024308    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.024308    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:46.027994    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:46.060725    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.060725    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:46.064446    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:46.092825    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.092825    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:46.098256    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:46.129614    6296 logs.go:282] 0 containers: []
	W1217 02:13:46.129688    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:46.129688    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:46.129688    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:46.216242    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:46.204904   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.206123   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.207788   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.210288   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.211623   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:46.204904   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.206123   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.207788   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.210288   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:46.211623   18931 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:46.216263    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:46.216263    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:46.248767    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:46.248767    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:46.298044    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:46.298044    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:46.363186    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:46.363186    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:48.911992    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:48.946588    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}
	I1217 02:13:48.983880    6296 logs.go:282] 0 containers: []
	W1217 02:13:48.983880    6296 logs.go:284] No container was found matching "kube-apiserver"
	I1217 02:13:48.987999    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_etcd --format={{.ID}}
	I1217 02:13:49.017254    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.017254    6296 logs.go:284] No container was found matching "etcd"
	I1217 02:13:49.021239    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_coredns --format={{.ID}}
	I1217 02:13:49.053619    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.053619    6296 logs.go:284] No container was found matching "coredns"
	I1217 02:13:49.057711    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}
	I1217 02:13:49.086289    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.086289    6296 logs.go:284] No container was found matching "kube-scheduler"
	I1217 02:13:49.090230    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}
	I1217 02:13:49.123069    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.123069    6296 logs.go:284] No container was found matching "kube-proxy"
	I1217 02:13:49.130107    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}
	I1217 02:13:49.158724    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.158724    6296 logs.go:284] No container was found matching "kube-controller-manager"
	I1217 02:13:49.162733    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}
	I1217 02:13:49.193515    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.193573    6296 logs.go:284] No container was found matching "kindnet"
	I1217 02:13:49.197116    6296 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_kubernetes-dashboard --format={{.ID}}
	I1217 02:13:49.230153    6296 logs.go:282] 0 containers: []
	W1217 02:13:49.230201    6296 logs.go:284] No container was found matching "kubernetes-dashboard"
	I1217 02:13:49.230245    6296 logs.go:123] Gathering logs for Docker ...
	I1217 02:13:49.230245    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u docker -u cri-docker -n 400"
	I1217 02:13:49.259747    6296 logs.go:123] Gathering logs for container status ...
	I1217 02:13:49.259747    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1217 02:13:49.312360    6296 logs.go:123] Gathering logs for kubelet ...
	I1217 02:13:49.312456    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I1217 02:13:49.375035    6296 logs.go:123] Gathering logs for dmesg ...
	I1217 02:13:49.375035    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1217 02:13:49.413908    6296 logs.go:123] Gathering logs for describe nodes ...
	I1217 02:13:49.413908    6296 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W1217 02:13:49.508187    6296 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:13:49.496893   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.499745   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.502343   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.503338   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.504593   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	E1217 02:13:49.496893   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.499745   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.502343   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.503338   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:13:49.504593   19127 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I1217 02:13:52.012834    6296 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 02:13:52.037104    6296 out.go:203] 
	W1217 02:13:52.039462    6296 out.go:285] X Exiting due to K8S_APISERVER_MISSING: wait 6m0s for node: wait for apiserver proc: apiserver process never appeared
	W1217 02:13:52.039520    6296 out.go:285] * Suggestion: Check that the provided apiserver flags are valid, and that SELinux is disabled
	W1217 02:13:52.039588    6296 out.go:285] * Related issues:
	W1217 02:13:52.039588    6296 out.go:285]   - https://github.com/kubernetes/minikube/issues/4536
	W1217 02:13:52.039635    6296 out.go:285]   - https://github.com/kubernetes/minikube/issues/6014
	I1217 02:13:52.041923    6296 out.go:203] 
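	Editor's note: the exit above (K8S_APISERVER_MISSING) means the retry loop recorded throughout this log never found a kube-apiserver process or container before the 6m0s deadline. Beyond minikube's own suggestion, a manual triage from the host would repeat the same probes the log shows, plus a cgroup check — a minimal sketch, assuming the profile name no-preload-184000 seen in the sections below and a working minikube ssh:

	    minikube ssh -p no-preload-184000 -- "sudo pgrep -af kube-apiserver"             # same probe logs.go runs above
	    minikube ssh -p no-preload-184000 -- "sudo journalctl -u kubelet -n 50 --no-pager"
	    minikube ssh -p no-preload-184000 -- "stat -fc %T /sys/fs/cgroup/"               # cgroup2fs => v2, tmpfs => v1

	The kubelet section at the end of this log is where the third probe pays off.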
	
	
	==> Docker <==
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325544488Z" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325628897Z" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325641498Z" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325647799Z" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325653800Z" level=warning msg="WARNING: Support for cgroup v1 is deprecated and planned to be removed by no later than May 2029 (https://github.com/moby/moby/issues/51111)"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325676802Z" level=info msg="Docker daemon" commit=fbf3ed2 containerd-snapshotter=false storage-driver=overlay2 version=29.1.3
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.325716506Z" level=info msg="Initializing buildkit"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.423454913Z" level=info msg="Completed buildkit initialization"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.434194190Z" level=info msg="Daemon has completed initialization"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.434389711Z" level=info msg="API listen on [::]:2376"
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.434491222Z" level=info msg="API listen on /var/run/docker.sock"
	Dec 17 02:05:13 no-preload-184000 systemd[1]: Started docker.service - Docker Application Container Engine.
	Dec 17 02:05:13 no-preload-184000 dockerd[935]: time="2025-12-17T02:05:13.434476421Z" level=info msg="API listen on /run/docker.sock"
	Dec 17 02:05:14 no-preload-184000 systemd[1]: Starting cri-docker.service - CRI Interface for Docker Application Container Engine...
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Starting cri-dockerd dev (HEAD)"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Connecting to docker on the Endpoint unix:///var/run/docker.sock"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Start docker client with request timeout 0s"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Hairpin mode is set to hairpin-veth"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Loaded network plugin cni"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Docker cri networking managed by network plugin cni"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Setting cgroupDriver cgroupfs"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:,},}"
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Starting the GRPC backend for the Docker CRI interface."
	Dec 17 02:05:14 no-preload-184000 cri-dockerd[1232]: time="2025-12-17T02:05:14Z" level=info msg="Start cri-dockerd grpc backend"
	Dec 17 02:05:14 no-preload-184000 systemd[1]: Started cri-docker.service - CRI Interface for Docker Application Container Engine.
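	Editor's note: the deprecation warning near the top of this Docker section (cgroup v1 support planned for removal) is the first hint of what the kubelet section below confirms. A quick way to see which cgroup setup the daemon is actually running under — a hedged sketch, assuming the docker CLI is available inside the node:

	    docker info --format 'cgroup driver={{.CgroupDriver}}, cgroup version={{.CgroupVersion}}'
	    # on this host the version should print 1, matching the kubelet rejection below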
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD                 NAMESPACE
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.35.0-beta.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	E1217 02:23:58.841022   20784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:23:58.842246   20784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:23:58.843718   20784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:23:58.846471   20784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	E1217 02:23:58.847759   20784 memcache.go:265] "Unhandled Error" err="couldn't get current server API group list: Get \"https://localhost:8443/api?timeout=32s\": dial tcp [::1]:8443: connect: connection refused"
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +5.752411] CPU: 12 PID: 469779 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7f8b9b910b20
	[  +0.000008] Code: Unable to access opcode bytes at RIP 0x7f8b9b910af6.
	[  +0.000001] RSP: 002b:00007fffc85e9670 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000003] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000001] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	[  +0.875329] CPU: 10 PID: 469916 Comm: exe Not tainted 5.15.153.1-microsoft-standard-WSL2 #1
	[  +0.000004] RIP: 0033:0x7fdfac8dab20
	[  +0.000007] Code: Unable to access opcode bytes at RIP 0x7fdfac8daaf6.
	[  +0.000001] RSP: 002b:00007ffd587a0060 EFLAGS: 00000200 ORIG_RAX: 000000000000003b
	[  +0.000002] RAX: 0000000000000000 RBX: 0000000000000000 RCX: 0000000000000000
	[  +0.000002] RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
	[  +0.000001] RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
	[  +0.000001] R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
	[  +0.000001] R13: 0000000000000000 R14: 0000000000000000 R15: 0000000000000000
	[  +0.000001] FS:  0000000000000000 GS:  0000000000000000
	
	
	==> kernel <==
	 02:23:58 up  2:43,  0 user,  load average: 0.44, 0.44, 1.23
	Linux no-preload-184000 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kubelet <==
	Dec 17 02:23:55 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:23:55 no-preload-184000 kubelet[20599]: E1217 02:23:55.989715   20599 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:23:55 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:23:55 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:23:56 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1495.
	Dec 17 02:23:56 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:23:56 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:23:56 no-preload-184000 kubelet[20627]: E1217 02:23:56.786461   20627 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:23:56 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:23:56 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:23:57 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1496.
	Dec 17 02:23:57 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:23:57 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:23:57 no-preload-184000 kubelet[20655]: E1217 02:23:57.499061   20655 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:23:57 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:23:57 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:23:58 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1497.
	Dec 17 02:23:58 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:23:58 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:23:58 no-preload-184000 kubelet[20667]: E1217 02:23:58.261828   20667 run.go:72] "command failed" err="failed to validate kubelet configuration, error: kubelet is configured to not run on a host using cgroup v1. cgroup v1 support is unsupported and will be removed in a future release, path: &TypeMeta{Kind:,APIVersion:,}"
	Dec 17 02:23:58 no-preload-184000 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
	Dec 17 02:23:58 no-preload-184000 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Dec 17 02:23:58 no-preload-184000 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1498.
	Dec 17 02:23:58 no-preload-184000 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Dec 17 02:23:58 no-preload-184000 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
	

-- /stdout --
helpers_test.go:263: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000
helpers_test.go:263: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p no-preload-184000 -n no-preload-184000: exit status 2 (595.4177ms)

-- stdout --
	Stopped

-- /stdout --
helpers_test.go:263: status error: exit status 2 (may be ok)
helpers_test.go:265: "no-preload-184000" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (215.34s)
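
The kubelet journal above shows the actual root cause of this failure chain: on this host (a cgroup v1 WSL2 kernel), kubelet v1.35.0-beta.0 fails its configuration validation and systemd restart-loops it (restart counter 1495-1498), so the apiserver never comes up and every kubectl call above is refused. A minimal way to confirm the cgroup version on the node, reusing the profile name from this log (an illustrative check, not part of the test suite; `stat -fc %T` prints the filesystem type mounted at /sys/fs/cgroup):

	out/minikube-windows-amd64.exe ssh -p no-preload-184000 "stat -fc %T /sys/fs/cgroup"
	# tmpfs     -> cgroup v1 (rejected by this kubelet version, per the error above)
	# cgroup2fs -> cgroup v2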


Test pass (357/427)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 7.21
4 TestDownloadOnly/v1.28.0/preload-exists 0.04
7 TestDownloadOnly/v1.28.0/kubectl 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.35
9 TestDownloadOnly/v1.28.0/DeleteAll 1.07
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.87
12 TestDownloadOnly/v1.34.2/json-events 5.03
13 TestDownloadOnly/v1.34.2/preload-exists 0
16 TestDownloadOnly/v1.34.2/kubectl 0
17 TestDownloadOnly/v1.34.2/LogsDuration 0.41
18 TestDownloadOnly/v1.34.2/DeleteAll 0.68
19 TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds 0.74
21 TestDownloadOnly/v1.35.0-beta.0/json-events 5.57
22 TestDownloadOnly/v1.35.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.35.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.35.0-beta.0/LogsDuration 0.22
27 TestDownloadOnly/v1.35.0-beta.0/DeleteAll 1.02
28 TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds 0.46
29 TestDownloadOnlyKic 2.06
30 TestBinaryMirror 2.22
31 TestOffline 117.36
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.19
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.21
36 TestAddons/Setup 298.56
38 TestAddons/serial/Volcano 51.47
40 TestAddons/serial/GCPAuth/Namespaces 0.25
41 TestAddons/serial/GCPAuth/FakeCredentials 10.19
45 TestAddons/parallel/RegistryCreds 1.28
47 TestAddons/parallel/InspektorGadget 12.14
48 TestAddons/parallel/MetricsServer 7.7
50 TestAddons/parallel/CSI 39.64
51 TestAddons/parallel/Headlamp 37.37
52 TestAddons/parallel/CloudSpanner 7.43
53 TestAddons/parallel/LocalPath 15.18
54 TestAddons/parallel/NvidiaDevicePlugin 6.82
55 TestAddons/parallel/Yakd 13.01
56 TestAddons/parallel/AmdGpuDevicePlugin 6.45
57 TestAddons/StoppedEnableDisable 12.89
58 TestCertOptions 57.41
59 TestCertExpiration 285.59
60 TestDockerFlags 74.75
62 TestForceSystemdEnv 60.15
68 TestErrorSpam/start 2.61
69 TestErrorSpam/status 2.17
70 TestErrorSpam/pause 2.62
71 TestErrorSpam/unpause 2.59
72 TestErrorSpam/stop 19.32
75 TestFunctional/serial/CopySyncFile 0.04
76 TestFunctional/serial/StartWithProxy 85.57
77 TestFunctional/serial/AuditLog 0
78 TestFunctional/serial/SoftStart 51.46
79 TestFunctional/serial/KubeContext 0.09
80 TestFunctional/serial/KubectlGetPods 0.26
83 TestFunctional/serial/CacheCmd/cache/add_remote 10.5
84 TestFunctional/serial/CacheCmd/cache/add_local 4.39
85 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.21
86 TestFunctional/serial/CacheCmd/cache/list 0.19
87 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.59
88 TestFunctional/serial/CacheCmd/cache/cache_reload 4.56
89 TestFunctional/serial/CacheCmd/cache/delete 0.38
90 TestFunctional/serial/MinikubeKubectlCmd 0.38
91 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.83
92 TestFunctional/serial/ExtraConfig 47.61
93 TestFunctional/serial/ComponentHealth 0.14
94 TestFunctional/serial/LogsCmd 1.89
95 TestFunctional/serial/LogsFileCmd 1.92
96 TestFunctional/serial/InvalidService 5.06
98 TestFunctional/parallel/ConfigCmd 1.19
100 TestFunctional/parallel/DryRun 1.52
101 TestFunctional/parallel/InternationalLanguage 0.67
102 TestFunctional/parallel/StatusCmd 1.96
107 TestFunctional/parallel/AddonsCmd 0.42
108 TestFunctional/parallel/PersistentVolumeClaim 26.76
110 TestFunctional/parallel/SSHCmd 1.18
111 TestFunctional/parallel/CpCmd 3.16
112 TestFunctional/parallel/MySQL 71.24
113 TestFunctional/parallel/FileSync 0.56
114 TestFunctional/parallel/CertSync 3.32
118 TestFunctional/parallel/NodeLabels 0.13
120 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
122 TestFunctional/parallel/License 1.56
123 TestFunctional/parallel/ServiceCmd/DeployApp 8.32
124 TestFunctional/parallel/ProfileCmd/profile_not_create 1.03
125 TestFunctional/parallel/ProfileCmd/profile_list 0.86
126 TestFunctional/parallel/ProfileCmd/profile_json_output 0.99
128 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.69
129 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
131 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 15.36
132 TestFunctional/parallel/Version/short 0.18
133 TestFunctional/parallel/Version/components 3.49
134 TestFunctional/parallel/ImageCommands/ImageListShort 0.46
135 TestFunctional/parallel/ImageCommands/ImageListTable 0.55
136 TestFunctional/parallel/ImageCommands/ImageListJson 0.45
137 TestFunctional/parallel/ImageCommands/ImageListYaml 0.52
138 TestFunctional/parallel/ImageCommands/ImageBuild 10.06
139 TestFunctional/parallel/ImageCommands/Setup 1.8
140 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 4.33
141 TestFunctional/parallel/ServiceCmd/List 0.67
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.81
143 TestFunctional/parallel/ServiceCmd/HTTPS 15.02
144 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.86
145 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 3.54
146 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.67
147 TestFunctional/parallel/ImageCommands/ImageRemove 0.91
148 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.13
153 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
154 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.13
155 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.99
156 TestFunctional/parallel/DockerEnv/powershell 5.36
157 TestFunctional/parallel/ServiceCmd/Format 15.03
158 TestFunctional/parallel/UpdateContextCmd/no_changes 0.34
159 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.31
160 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.33
161 TestFunctional/parallel/ServiceCmd/URL 15.01
162 TestFunctional/delete_echo-server_images 0.15
163 TestFunctional/delete_my-image_image 0.06
164 TestFunctional/delete_minikube_cached_images 0.07
168 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile 0
170 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog 0
172 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext 0.1
176 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote 10.21
177 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local 3.85
178 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete 0.18
179 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list 0.18
180 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node 0.58
181 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload 4.57
182 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete 0.37
187 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd 1.25
188 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd 1.38
191 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd 1.09
193 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun 1.61
194 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage 0.69
200 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd 0.4
203 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd 1.13
204 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd 3.11
206 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync 0.54
207 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync 3.19
213 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled 0.53
215 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License 2.28
222 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel 0
231 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel 0
233 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes 0.29
234 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster 0.34
235 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters 0.31
236 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create 0.88
237 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list 0.82
238 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output 0.8
239 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short 0.18
240 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components 1.8
241 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort 0.43
242 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable 0.45
243 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson 0.46
244 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml 0.45
245 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild 5.37
246 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup 0.85
247 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon 3.3
248 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon 2.78
249 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon 3.52
250 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile 0.67
251 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove 0.9
252 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile 1.18
253 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon 0.86
254 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images 0.14
255 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image 0.06
256 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images 0.05
260 TestMultiControlPlane/serial/StartCluster 241.54
261 TestMultiControlPlane/serial/DeployApp 10.6
262 TestMultiControlPlane/serial/PingHostFromPods 2.5
263 TestMultiControlPlane/serial/AddWorkerNode 55.74
264 TestMultiControlPlane/serial/NodeLabels 0.14
265 TestMultiControlPlane/serial/HAppyAfterClusterStart 2
266 TestMultiControlPlane/serial/CopyFile 33.8
267 TestMultiControlPlane/serial/StopSecondaryNode 13.51
268 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 1.56
269 TestMultiControlPlane/serial/RestartSecondaryNode 98.16
270 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 3.44
271 TestMultiControlPlane/serial/RestartClusterKeepsNodes 168.94
272 TestMultiControlPlane/serial/DeleteSecondaryNode 14.9
273 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 1.49
274 TestMultiControlPlane/serial/StopCluster 37.76
275 TestMultiControlPlane/serial/RestartCluster 85.99
276 TestMultiControlPlane/serial/DegradedAfterClusterRestart 1.53
277 TestMultiControlPlane/serial/AddSecondaryNode 84.86
278 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 2.01
281 TestImageBuild/serial/Setup 52.06
282 TestImageBuild/serial/NormalBuild 4.65
283 TestImageBuild/serial/BuildWithBuildArg 2.17
284 TestImageBuild/serial/BuildWithDockerIgnore 1.33
285 TestImageBuild/serial/BuildWithSpecifiedDockerfile 1.28
290 TestJSONOutput/start/Command 79.72
291 TestJSONOutput/start/Audit 0
293 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
294 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
296 TestJSONOutput/pause/Command 1.07
297 TestJSONOutput/pause/Audit 0
299 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
300 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
302 TestJSONOutput/unpause/Command 0.95
303 TestJSONOutput/unpause/Audit 0
305 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
306 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
308 TestJSONOutput/stop/Command 12.12
309 TestJSONOutput/stop/Audit 0
311 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
312 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
313 TestErrorJSONOutput 0.66
315 TestKicCustomNetwork/create_custom_network 54.63
316 TestKicCustomNetwork/use_default_bridge_network 52.67
317 TestKicExistingNetwork 54.44
318 TestKicCustomSubnet 52.58
319 TestKicStaticIP 56.87
320 TestMainNoArgs 0.16
321 TestMinikubeProfile 102
324 TestMountStart/serial/StartWithMountFirst 13.89
325 TestMountStart/serial/VerifyMountFirst 0.56
326 TestMountStart/serial/StartWithMountSecond 13.55
327 TestMountStart/serial/VerifyMountSecond 0.53
328 TestMountStart/serial/DeleteFirst 2.43
329 TestMountStart/serial/VerifyMountPostDelete 0.57
330 TestMountStart/serial/Stop 1.86
331 TestMountStart/serial/RestartStopped 10.85
332 TestMountStart/serial/VerifyMountPostStop 0.56
335 TestMultiNode/serial/FreshStart2Nodes 134.15
336 TestMultiNode/serial/DeployApp2Nodes 7.03
337 TestMultiNode/serial/PingHostFrom2Pods 1.75
338 TestMultiNode/serial/AddNode 53.69
339 TestMultiNode/serial/MultiNodeLabels 0.14
340 TestMultiNode/serial/ProfileList 1.39
341 TestMultiNode/serial/CopyFile 19.19
342 TestMultiNode/serial/StopNode 3.75
343 TestMultiNode/serial/StartAfterStop 13.45
344 TestMultiNode/serial/RestartKeepsNodes 89.3
345 TestMultiNode/serial/DeleteNode 8.35
346 TestMultiNode/serial/StopMultiNode 23.98
347 TestMultiNode/serial/RestartMultiNode 60.2
348 TestMultiNode/serial/ValidateNameConflict 48.23
352 TestPreload 159.68
353 TestScheduledStopWindows 116
357 TestInsufficientStorage 28.43
358 TestRunningBinaryUpgrade 373.29
361 TestMissingContainerUpgrade 233.18
364 TestNoKubernetes/serial/StartNoK8sWithVersion 0.25
375 TestNoKubernetes/serial/StartWithK8s 75.16
376 TestNoKubernetes/serial/StartWithStopK8s 25.61
377 TestNoKubernetes/serial/Start 15.21
378 TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads 0
379 TestNoKubernetes/serial/VerifyK8sNotRunning 0.87
380 TestNoKubernetes/serial/ProfileList 3.48
381 TestNoKubernetes/serial/Stop 2.32
382 TestNoKubernetes/serial/StartNoArgs 11.6
383 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.68
384 TestStoppedBinaryUpgrade/Setup 1.79
385 TestStoppedBinaryUpgrade/Upgrade 436.25
394 TestPause/serial/Start 86.29
395 TestPause/serial/SecondStartNoReconfiguration 47.44
396 TestPause/serial/Pause 1.05
397 TestPause/serial/VerifyStatus 0.64
398 TestPause/serial/Unpause 0.86
399 TestPause/serial/PauseAgain 1.3
400 TestPause/serial/DeletePaused 3.98
401 TestPause/serial/VerifyDeletedResources 1.81
402 TestNetworkPlugins/group/auto/Start 91.04
403 TestStoppedBinaryUpgrade/MinikubeLogs 1.47
404 TestNetworkPlugins/group/calico/Start 104.76
405 TestNetworkPlugins/group/auto/KubeletFlags 0.56
406 TestNetworkPlugins/group/auto/NetCatPod 14.48
407 TestNetworkPlugins/group/auto/DNS 0.25
408 TestNetworkPlugins/group/auto/Localhost 0.21
409 TestNetworkPlugins/group/auto/HairPin 0.22
410 TestNetworkPlugins/group/custom-flannel/Start 78.44
411 TestNetworkPlugins/group/calico/ControllerPod 5.02
412 TestNetworkPlugins/group/calico/KubeletFlags 0.58
413 TestNetworkPlugins/group/calico/NetCatPod 15.48
414 TestNetworkPlugins/group/calico/DNS 0.28
415 TestNetworkPlugins/group/calico/Localhost 0.23
416 TestNetworkPlugins/group/calico/HairPin 0.21
417 TestNetworkPlugins/group/false/Start 90.79
418 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.59
419 TestNetworkPlugins/group/custom-flannel/NetCatPod 15.83
420 TestNetworkPlugins/group/kindnet/Start 91.16
421 TestNetworkPlugins/group/custom-flannel/DNS 0.24
422 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
423 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
424 TestNetworkPlugins/group/flannel/Start 71.69
425 TestNetworkPlugins/group/false/KubeletFlags 0.64
426 TestNetworkPlugins/group/false/NetCatPod 23.26
427 TestNetworkPlugins/group/false/DNS 0.24
428 TestNetworkPlugins/group/false/Localhost 0.2
429 TestNetworkPlugins/group/false/HairPin 0.21
430 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
431 TestNetworkPlugins/group/kindnet/KubeletFlags 0.58
432 TestNetworkPlugins/group/kindnet/NetCatPod 15.65
433 TestNetworkPlugins/group/kindnet/DNS 0.25
434 TestNetworkPlugins/group/kindnet/Localhost 0.21
435 TestNetworkPlugins/group/kindnet/HairPin 0.22
436 TestNetworkPlugins/group/enable-default-cni/Start 90.08
437 TestNetworkPlugins/group/flannel/ControllerPod 6.01
438 TestNetworkPlugins/group/flannel/KubeletFlags 0.55
439 TestNetworkPlugins/group/flannel/NetCatPod 23.63
440 TestNetworkPlugins/group/flannel/DNS 0.27
441 TestNetworkPlugins/group/flannel/Localhost 0.24
442 TestNetworkPlugins/group/flannel/HairPin 0.21
443 TestNetworkPlugins/group/bridge/Start 85.54
444 TestNetworkPlugins/group/kubenet/Start 90.44
445 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.68
446 TestNetworkPlugins/group/enable-default-cni/NetCatPod 15.52
447 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
448 TestNetworkPlugins/group/enable-default-cni/Localhost 0.21
449 TestNetworkPlugins/group/enable-default-cni/HairPin 0.22
450 TestNetworkPlugins/group/bridge/KubeletFlags 0.61
451 TestNetworkPlugins/group/bridge/NetCatPod 14.54
452 TestNetworkPlugins/group/bridge/DNS 0.27
453 TestNetworkPlugins/group/bridge/Localhost 0.25
454 TestNetworkPlugins/group/bridge/HairPin 0.22
456 TestStartStop/group/old-k8s-version/serial/FirstStart 109.72
457 TestNetworkPlugins/group/kubenet/KubeletFlags 0.62
458 TestNetworkPlugins/group/kubenet/NetCatPod 14.48
461 TestNetworkPlugins/group/kubenet/DNS 0.29
462 TestNetworkPlugins/group/kubenet/Localhost 0.23
463 TestNetworkPlugins/group/kubenet/HairPin 0.2
465 TestStartStop/group/embed-certs/serial/FirstStart 84.09
467 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 79.09
468 TestStartStop/group/old-k8s-version/serial/DeployApp 10.68
469 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.88
470 TestStartStop/group/old-k8s-version/serial/Stop 12.13
471 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.52
472 TestStartStop/group/old-k8s-version/serial/SecondStart 47.73
473 TestStartStop/group/embed-certs/serial/DeployApp 9.51
474 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.62
475 TestStartStop/group/embed-certs/serial/Stop 12.41
476 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.54
477 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.55
478 TestStartStop/group/embed-certs/serial/SecondStart 60.14
479 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.7
480 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.5
481 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.6
482 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 60.83
483 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 8.01
484 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.45
485 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.54
486 TestStartStop/group/old-k8s-version/serial/Pause 5.16
489 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.02
490 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.33
491 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.46
492 TestStartStop/group/embed-certs/serial/Pause 5.14
493 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
494 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.31
495 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.46
496 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.93
499 TestStartStop/group/no-preload/serial/Stop 1.87
500 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.55
502 TestStartStop/group/newest-cni/serial/DeployApp 0
504 TestStartStop/group/newest-cni/serial/Stop 1.88
505 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.52
508 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
509 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
510 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.46
TestDownloadOnly/v1.28.0/json-events (7.21s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-684900 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-684900 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker: (7.2046344s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (7.21s)

TestDownloadOnly/v1.28.0/preload-exists (0.04s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1217 00:05:12.157490    4168 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
I1217 00:05:12.200878    4168 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.04s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
--- PASS: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.28.0/LogsDuration (0.35s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-684900
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-684900: exit status 85 (349.0993ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-684900 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-684900 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:05 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:05:05
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:05:05.020043    5788 out.go:360] Setting OutFile to fd 668 ...
	I1217 00:05:05.062309    5788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:05:05.062309    5788 out.go:374] Setting ErrFile to fd 672...
	I1217 00:05:05.062309    5788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	W1217 00:05:05.073023    5788 root.go:314] Error reading config file at C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I1217 00:05:05.080197    5788 out.go:368] Setting JSON to true
	I1217 00:05:05.082601    5788 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1493,"bootTime":1765928411,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:05:05.082601    5788 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:05:05.088107    5788 out.go:99] [download-only-684900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	W1217 00:05:05.088107    5788 preload.go:354] Failed to list preload files: open C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I1217 00:05:05.088107    5788 notify.go:221] Checking for updates...
	I1217 00:05:05.091326    5788 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:05:05.093854    5788 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:05:05.095962    5788 out.go:171] MINIKUBE_LOCATION=22168
	I1217 00:05:05.098357    5788 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1217 00:05:05.103341    5788 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 00:05:05.103868    5788 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:05:05.324836    5788 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:05:05.327848    5788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:05:06.022076    5788 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:76 SystemTime:2025-12-17 00:05:05.996093249 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:05:06.026069    5788 out.go:99] Using the docker driver based on user configuration
	I1217 00:05:06.026069    5788 start.go:309] selected driver: docker
	I1217 00:05:06.026069    5788 start.go:927] validating driver "docker" against <nil>
	I1217 00:05:06.032069    5788 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:05:06.279550    5788 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:76 SystemTime:2025-12-17 00:05:06.258913877 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:05:06.280192    5788 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:05:06.332328    5788 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1217 00:05:06.333397    5788 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 00:05:06.336987    5788 out.go:171] Using Docker Desktop driver with root privileges
	I1217 00:05:06.338883    5788 cni.go:84] Creating CNI manager for ""
	I1217 00:05:06.338883    5788 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:05:06.338883    5788 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 00:05:06.338883    5788 start.go:353] cluster config:
	{Name:download-only-684900 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.0 ClusterName:download-only-684900 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local C
ontainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:05:06.343158    5788 out.go:99] Starting "download-only-684900" primary control-plane node in "download-only-684900" cluster
	I1217 00:05:06.343158    5788 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 00:05:06.345262    5788 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:05:06.345262    5788 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1217 00:05:06.345262    5788 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:05:06.381732    5788 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	I1217 00:05:06.381732    5788 cache.go:65] Caching tarball of preloaded images
	I1217 00:05:06.382333    5788 preload.go:188] Checking if preload exists for k8s version v1.28.0 and runtime docker
	I1217 00:05:06.386083    5788 out.go:99] Downloading Kubernetes v1.28.0 preload ...
	I1217 00:05:06.386083    5788 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1217 00:05:06.400662    5788 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 00:05:06.401223    5788 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765661130-22141@sha256_71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar
	I1217 00:05:06.401223    5788 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765661130-22141@sha256_71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar
	I1217 00:05:06.401223    5788 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 00:05:06.402190    5788 image.go:150] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 00:05:06.462264    5788 preload.go:295] Got checksum from GCS API "8a955be835827bc584bcce0658a7fcc9"
	I1217 00:05:06.462264    5788 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4?checksum=md5:8a955be835827bc584bcce0658a7fcc9 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-684900 host does not exist
	  To start a cluster, run: "minikube start -p download-only-684900"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.35s)
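
The Last Start log above also shows how the preload download is integrity-checked: minikube obtains an MD5 from the GCS API and appends it to the download URL as a `checksum=md5:...` query parameter. A rough by-hand equivalent, reusing the exact URL and checksum from this log (an illustrative sketch, not minikube's actual download code):

	URL='https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.28.0/preloaded-images-k8s-v18-v1.28.0-docker-overlay2-amd64.tar.lz4'
	curl -fsSLo preload.tar.lz4 "$URL"
	echo '8a955be835827bc584bcce0658a7fcc9  preload.tar.lz4' | md5sum -c -
	# md5sum -c exits non-zero if the tarball does not match the checksum minikube got from GCS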

TestDownloadOnly/v1.28.0/DeleteAll (1.07s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.0655779s)
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (1.07s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.87s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-684900
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.87s)

TestDownloadOnly/v1.34.2/json-events (5.03s)

=== RUN   TestDownloadOnly/v1.34.2/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-265700 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-265700 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker: (5.0296519s)
--- PASS: TestDownloadOnly/v1.34.2/json-events (5.03s)

TestDownloadOnly/v1.34.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.2/preload-exists
I1217 00:05:19.519515    4168 preload.go:188] Checking if preload exists for k8s version v1.34.2 and runtime docker
I1217 00:05:19.519631    4168 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.34.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.2/preload-exists (0.00s)

TestDownloadOnly/v1.34.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.2/kubectl
--- PASS: TestDownloadOnly/v1.34.2/kubectl (0.00s)

TestDownloadOnly/v1.34.2/LogsDuration (0.41s)

=== RUN   TestDownloadOnly/v1.34.2/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-265700
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-265700: exit status 85 (405.4838ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                       ARGS                                                                        │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-684900 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker │ download-only-684900 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:05 UTC │                     │
	│ delete  │ --all                                                                                                                                             │ minikube             │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:05 UTC │ 17 Dec 25 00:05 UTC │
	│ delete  │ -p download-only-684900                                                                                                                           │ download-only-684900 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:05 UTC │ 17 Dec 25 00:05 UTC │
	│ start   │ -o=json --download-only -p download-only-265700 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker │ download-only-265700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:05 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:05:14
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:05:14.561986    7680 out.go:360] Setting OutFile to fd 800 ...
	I1217 00:05:14.606493    7680 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:05:14.606493    7680 out.go:374] Setting ErrFile to fd 804...
	I1217 00:05:14.606493    7680 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:05:14.619879    7680 out.go:368] Setting JSON to true
	I1217 00:05:14.623350    7680 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1503,"bootTime":1765928411,"procs":189,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:05:14.623414    7680 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:05:14.628856    7680 out.go:99] [download-only-265700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:05:14.628856    7680 notify.go:221] Checking for updates...
	I1217 00:05:14.631494    7680 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:05:14.637459    7680 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:05:14.639751    7680 out.go:171] MINIKUBE_LOCATION=22168
	I1217 00:05:14.642352    7680 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1217 00:05:14.646514    7680 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 00:05:14.647100    7680 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:05:14.764820    7680 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:05:14.768767    7680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:05:15.003764    7680 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:76 SystemTime:2025-12-17 00:05:14.983335583 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:05:15.006665    7680 out.go:99] Using the docker driver based on user configuration
	I1217 00:05:15.006665    7680 start.go:309] selected driver: docker
	I1217 00:05:15.006665    7680 start.go:927] validating driver "docker" against <nil>
	I1217 00:05:15.016100    7680 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:05:15.343530    7680 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:76 SystemTime:2025-12-17 00:05:15.32609022 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Inde
xServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 E
xpected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescri
ption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progra
m Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:05:15.344272    7680 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:05:15.380781    7680 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1217 00:05:15.380781    7680 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 00:05:15.669631    7680 out.go:171] Using Docker Desktop driver with root privileges
	
	
	* The control-plane node download-only-265700 host does not exist
	  To start a cluster, run: "minikube start -p download-only-265700"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.2/LogsDuration (0.41s)
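
Note on the exit code: status 85 is the expected outcome for this subtest. A --download-only start never creates the container host, so "minikube logs" has nothing to collect, and the harness asserts the failure rather than treating it as an error. A minimal manual reproduction, assuming the same out/minikube-windows-amd64.exe build (profile name taken from the run above):

    out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-265700 --force --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker
    out/minikube-windows-amd64.exe logs -p download-only-265700    # prints the audit table, then the "host does not exist" hint
    $LASTEXITCODE                                                  # 85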

TestDownloadOnly/v1.34.2/DeleteAll (0.68s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.34.2/DeleteAll (0.68s)

TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.74s)

=== RUN   TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-265700
--- PASS: TestDownloadOnly/v1.34.2/DeleteAlwaysSucceeds (0.74s)

TestDownloadOnly/v1.35.0-beta.0/json-events (5.57s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-543700 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker
aaa_download_only_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-543700 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker: (5.568316s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/json-events (5.57s)
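
The json-events subtest drives "start -o=json", which emits one JSON CloudEvent per line on stdout. A rough PowerShell sketch of consuming that stream (the .data.message field path follows minikube's JSON event schema; treat it as illustrative):

    out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-543700 --force --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker |
        ForEach-Object { ($_ | ConvertFrom-Json).data.message }    # print each step/download event as it arrives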

TestDownloadOnly/v1.35.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/preload-exists
I1217 00:05:26.922344    4168 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
I1217 00:05:26.922344    4168 preload.go:203] Found local preload: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.35.0-beta.0/preload-exists (0.00s)
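
preload-exists only verifies that the tarball fetched by the previous subtest landed in the local cache, which is why it completes in 0.00s. The equivalent manual check, using the cache path logged above for this runner:

    Test-Path "C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4"    # expected: True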

TestDownloadOnly/v1.35.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.35.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.22s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-543700
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-543700: exit status 85 (218.1418ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬───────────────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                           ARGS                                                                           │       PROFILE        │       USER        │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼───────────────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-684900 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=docker --driver=docker        │ download-only-684900 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                    │ minikube             │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:05 UTC │ 17 Dec 25 00:05 UTC │
	│ delete  │ -p download-only-684900                                                                                                                                  │ download-only-684900 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:05 UTC │ 17 Dec 25 00:05 UTC │
	│ start   │ -o=json --download-only -p download-only-265700 --force --alsologtostderr --kubernetes-version=v1.34.2 --container-runtime=docker --driver=docker        │ download-only-265700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:05 UTC │                     │
	│ delete  │ --all                                                                                                                                                    │ minikube             │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:05 UTC │ 17 Dec 25 00:05 UTC │
	│ delete  │ -p download-only-265700                                                                                                                                  │ download-only-265700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:05 UTC │ 17 Dec 25 00:05 UTC │
	│ start   │ -o=json --download-only -p download-only-543700 --force --alsologtostderr --kubernetes-version=v1.35.0-beta.0 --container-runtime=docker --driver=docker │ download-only-543700 │ minikube4\jenkins │ v1.37.0 │ 17 Dec 25 00:05 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴───────────────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/12/17 00:05:21
	Running on machine: minikube4
	Binary: Built with gc go1.25.5 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1217 00:05:21.426088   13360 out.go:360] Setting OutFile to fd 864 ...
	I1217 00:05:21.468396   13360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:05:21.468396   13360 out.go:374] Setting ErrFile to fd 868...
	I1217 00:05:21.468396   13360 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:05:21.484845   13360 out.go:368] Setting JSON to true
	I1217 00:05:21.487308   13360 start.go:133] hostinfo: {"hostname":"minikube4","uptime":1510,"bootTime":1765928411,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:05:21.487308   13360 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:05:21.505089   13360 out.go:99] [download-only-543700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:05:21.505089   13360 notify.go:221] Checking for updates...
	I1217 00:05:21.508126   13360 out.go:171] KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:05:21.511335   13360 out.go:171] MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:05:21.513658   13360 out.go:171] MINIKUBE_LOCATION=22168
	I1217 00:05:21.516309   13360 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W1217 00:05:21.521271   13360 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1217 00:05:21.522297   13360 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:05:21.634709   13360 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:05:21.638134   13360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:05:21.878184   13360 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:76 SystemTime:2025-12-17 00:05:21.856958747 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:05:21.882885   13360 out.go:99] Using the docker driver based on user configuration
	I1217 00:05:21.882934   13360 start.go:309] selected driver: docker
	I1217 00:05:21.882997   13360 start.go:927] validating driver "docker" against <nil>
	I1217 00:05:21.889127   13360 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:05:22.151219   13360 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:0 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:52 OomKillDisable:true NGoroutines:76 SystemTime:2025-12-17 00:05:22.131004937 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:05:22.151544   13360 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1217 00:05:22.185829   13360 start_flags.go:410] Using suggested 16300MB memory alloc based on sys=65534MB, container=32098MB
	I1217 00:05:22.186832   13360 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1217 00:05:22.459011   13360 out.go:171] Using Docker Desktop driver with root privileges
	I1217 00:05:22.461914   13360 cni.go:84] Creating CNI manager for ""
	I1217 00:05:22.462064   13360 cni.go:158] "docker" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I1217 00:05:22.462064   13360 start_flags.go:336] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I1217 00:05:22.462174   13360 start.go:353] cluster config:
	{Name:download-only-543700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:16300 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:download-only-543700 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:05:22.464589   13360 out.go:99] Starting "download-only-543700" primary control-plane node in "download-only-543700" cluster
	I1217 00:05:22.464589   13360 cache.go:134] Beginning downloading kic base image for docker with docker
	I1217 00:05:22.466416   13360 out.go:99] Pulling base image v0.0.48-1765661130-22141 ...
	I1217 00:05:22.466416   13360 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:05:22.466416   13360 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local docker daemon
	I1217 00:05:22.503987   13360 preload.go:148] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 00:05:22.503987   13360 cache.go:65] Caching tarball of preloaded images
	I1217 00:05:22.504570   13360 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:05:22.506877   13360 out.go:99] Downloading Kubernetes v1.35.0-beta.0 preload ...
	I1217 00:05:22.506980   13360 preload.go:318] getting checksum for preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4 from gcs api...
	I1217 00:05:22.519825   13360 cache.go:163] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 to local cache
	I1217 00:05:22.519825   13360 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765661130-22141@sha256_71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar
	I1217 00:05:22.520845   13360 localpath.go:148] windows sanitize: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\kic\amd64\kicbase-builds_v0.0.48-1765661130-22141@sha256_71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78.tar
	I1217 00:05:22.520845   13360 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory
	I1217 00:05:22.520845   13360 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 in local cache directory, skipping pull
	I1217 00:05:22.520845   13360 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 exists in cache, skipping pull
	I1217 00:05:22.520845   13360 cache.go:166] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 as a tarball
	I1217 00:05:22.582636   13360 preload.go:295] Got checksum from GCS API "7f0e1a4aaa3540d32279d04bf9728fae"
	I1217 00:05:22.582827   13360 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.35.0-beta.0/preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:7f0e1a4aaa3540d32279d04bf9728fae -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.35.0-beta.0-docker-overlay2-amd64.tar.lz4
	I1217 00:05:25.675391   13360 cache.go:68] Finished verifying existence of preloaded tar for v1.35.0-beta.0 on docker
	I1217 00:05:25.676063   13360 profile.go:143] Saving config to C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-543700\config.json ...
	I1217 00:05:25.676436   13360 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\download-only-543700\config.json: {Name:mk59b4adb19453565e11fbe3f876218ef19a260e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1217 00:05:25.677142   13360 preload.go:188] Checking if preload exists for k8s version v1.35.0-beta.0 and runtime docker
	I1217 00:05:25.677553   13360 download.go:108] Downloading: https://dl.k8s.io/release/v1.35.0-beta.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.35.0-beta.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v1.35.0-beta.0/kubectl.exe
	
	
	* The control-plane node download-only-543700 host does not exist
	  To start a cluster, run: "minikube start -p download-only-543700"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.35.0-beta.0/LogsDuration (0.22s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAll (1.02s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
aaa_download_only_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe delete --all: (1.0228302s)
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAll (1.02s)

TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.46s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-543700
--- PASS: TestDownloadOnly/v1.35.0-beta.0/DeleteAlwaysSucceeds (0.46s)
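
As the name says, DeleteAlwaysSucceeds asserts that profile deletion is safe to run unconditionally. A sketch of the property the test name encodes (the second invocation is hypothetical, not part of this run):

    out/minikube-windows-amd64.exe delete -p download-only-543700    # removes the profile
    out/minikube-windows-amd64.exe delete -p download-only-543700    # profile already gone; should still exit 0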

TestDownloadOnlyKic (2.06s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p download-docker-094300 --alsologtostderr --driver=docker
aaa_download_only_test.go:231: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p download-docker-094300 --alsologtostderr --driver=docker: (1.0533735s)
helpers_test.go:176: Cleaning up "download-docker-094300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-docker-094300
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p download-docker-094300: (1.0013979s)
--- PASS: TestDownloadOnlyKic (2.06s)

TestBinaryMirror (2.22s)

=== RUN   TestBinaryMirror
I1217 00:05:32.346029    4168 binary.go:80] Not caching binary, using https://dl.k8s.io/release/v1.34.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.34.2/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-545100 --alsologtostderr --binary-mirror http://127.0.0.1:55385 --driver=docker
aaa_download_only_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-545100 --alsologtostderr --binary-mirror http://127.0.0.1:55385 --driver=docker: (1.3688342s)
helpers_test.go:176: Cleaning up "binary-mirror-545100" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-545100
--- PASS: TestBinaryMirror (2.22s)
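
TestBinaryMirror points minikube at a local HTTP mirror (127.0.0.1:55385 in this run) instead of dl.k8s.io for the kubectl/kubelet/kubeadm binaries. The flag usage, exactly as exercised above (the mirror URL is whatever server is serving the release tree):

    out/minikube-windows-amd64.exe start --download-only -p binary-mirror-545100 --alsologtostderr --binary-mirror http://127.0.0.1:55385 --driver=docker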

TestOffline (117.36s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-067200 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-067200 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker: (1m53.457341s)
helpers_test.go:176: Cleaning up "offline-docker-067200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-067200
E1217 01:38:36.819117    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-067200: (3.9029557s)
--- PASS: TestOffline (117.36s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1002: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-401400
addons_test.go:1002: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-401400: exit status 85 (190.5725ms)

-- stdout --
	* Profile "addons-401400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-401400"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.19s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1013: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-401400
addons_test.go:1013: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-401400: exit status 85 (209.3071ms)

-- stdout --
	* Profile "addons-401400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-401400"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.21s)
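
Both PreSetup subtests check the same guard: addon operations against a profile that has never been started exit 85 with a pointer to "minikube start", rather than succeeding silently. Manually:

    out/minikube-windows-amd64.exe addons enable dashboard -p addons-401400     # before any start of this profile
    $LASTEXITCODE                                                               # 85, with the 'Profile "addons-401400" not found' hint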

TestAddons/Setup (298.56s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-401400 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-401400 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (4m58.5627986s)
--- PASS: TestAddons/Setup (298.56s)

TestAddons/serial/Volcano (51.47s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:878: volcano-admission stabilized in 17.1773ms
addons_test.go:870: volcano-scheduler stabilized in 17.1773ms
addons_test.go:886: volcano-controller stabilized in 17.1773ms
addons_test.go:892: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-scheduler-76c996c8bf-t9xmp" [84abecec-d31d-4638-a3c4-b2ebde579684] Running
addons_test.go:892: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.0064522s
addons_test.go:896: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-admission-6c447bd768-g9hzd" [222b5eaa-8624-4391-9e46-fb61a7177b98] Running
addons_test.go:896: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 6.0069325s
addons_test.go:900: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:353: "volcano-controllers-6fd4f85cb8-df99l" [e1ebceda-24cb-44bb-b95d-45d9027755e5] Running
addons_test.go:900: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.0065676s
addons_test.go:905: (dbg) Run:  kubectl --context addons-401400 delete -n volcano-system job volcano-admission-init
addons_test.go:911: (dbg) Run:  kubectl --context addons-401400 create -f testdata\vcjob.yaml
addons_test.go:919: (dbg) Run:  kubectl --context addons-401400 get vcjob -n my-volcano
addons_test.go:937: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:353: "test-job-nginx-0" [432bc785-67bd-4a1e-8097-510c0b1cc457] Pending
helpers_test.go:353: "test-job-nginx-0" [432bc785-67bd-4a1e-8097-510c0b1cc457] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "test-job-nginx-0" [432bc785-67bd-4a1e-8097-510c0b1cc457] Running
addons_test.go:937: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 21.0071228s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-401400 addons disable volcano --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-401400 addons disable volcano --alsologtostderr -v=1: (12.5555022s)
--- PASS: TestAddons/serial/Volcano (51.47s)
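
The Volcano flow above reduces to: wait for the three volcano-system components to stabilize, submit a VolcanoJob from testdata, and wait for its pod via the volcano.sh/job-name label. A condensed manual equivalent, with "kubectl wait" standing in for the harness's polling loop:

    kubectl --context addons-401400 create -f testdata\vcjob.yaml
    kubectl --context addons-401400 get vcjob -n my-volcano
    kubectl --context addons-401400 -n my-volcano wait pod -l volcano.sh/job-name=test-job --for=condition=Ready --timeout=3m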

TestAddons/serial/GCPAuth/Namespaces (0.25s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:632: (dbg) Run:  kubectl --context addons-401400 create ns new-namespace
addons_test.go:646: (dbg) Run:  kubectl --context addons-401400 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.25s)

TestAddons/serial/GCPAuth/FakeCredentials (10.19s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:677: (dbg) Run:  kubectl --context addons-401400 create -f testdata\busybox.yaml
addons_test.go:684: (dbg) Run:  kubectl --context addons-401400 create sa gcp-auth-test
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [840ae0b9-96e6-4012-9296-00c1aca15823] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [840ae0b9-96e6-4012-9296-00c1aca15823] Running
addons_test.go:690: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.0058028s
addons_test.go:696: (dbg) Run:  kubectl --context addons-401400 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:708: (dbg) Run:  kubectl --context addons-401400 describe sa gcp-auth-test
addons_test.go:722: (dbg) Run:  kubectl --context addons-401400 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:746: (dbg) Run:  kubectl --context addons-401400 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.19s)
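
The FakeCredentials assertions amount to: with the gcp-auth webhook active, a freshly created pod gets GOOGLE_APPLICATION_CREDENTIALS pointing at a mounted /google-app-creds.json, plus a GOOGLE_CLOUD_PROJECT value. The same spot checks by hand, against the busybox pod from this run:

    kubectl --context addons-401400 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
    kubectl --context addons-401400 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
    kubectl --context addons-401400 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"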

TestAddons/parallel/RegistryCreds (1.28s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:325: registry-creds stabilized in 43.28ms
addons_test.go:327: (dbg) Run:  out/minikube-windows-amd64.exe addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-401400
addons_test.go:334: (dbg) Run:  kubectl --context addons-401400 -n kube-system get secret -o yaml
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-401400 addons disable registry-creds --alsologtostderr -v=1
--- PASS: TestAddons/parallel/RegistryCreds (1.28s)

TestAddons/parallel/InspektorGadget (12.14s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:353: "gadget-phkzp" [17b82c53-ad0c-44e9-a87d-ebd3c88aa03d] Running
addons_test.go:825: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005775s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-401400 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-401400 addons disable inspektor-gadget --alsologtostderr -v=1: (6.1349568s)
--- PASS: TestAddons/parallel/InspektorGadget (12.14s)

TestAddons/parallel/MetricsServer (7.7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:457: metrics-server stabilized in 11.0538ms
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:353: "metrics-server-85b7d694d7-kvb5r" [4d9e82af-a654-4bb7-8d7d-f98ce9a36413] Running
addons_test.go:459: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.0054967s
addons_test.go:465: (dbg) Run:  kubectl --context addons-401400 top pods -n kube-system
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-401400 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-401400 addons disable metrics-server --alsologtostderr -v=1: (1.5610189s)
--- PASS: TestAddons/parallel/MetricsServer (7.70s)

TestAddons/parallel/CSI (39.64s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1217 00:12:20.072012    4168 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1217 00:12:20.079398    4168 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1217 00:12:20.079426    4168 kapi.go:107] duration metric: took 7.4757ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:551: csi-hostpath-driver pods stabilized in 7.4757ms
addons_test.go:554: (dbg) Run:  kubectl --context addons-401400 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:559: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:564: (dbg) Run:  kubectl --context addons-401400 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:353: "task-pv-pod" [45c00ccc-53e6-4a25-9b4b-ac5d5bff9bd6] Pending
helpers_test.go:353: "task-pv-pod" [45c00ccc-53e6-4a25-9b4b-ac5d5bff9bd6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod" [45c00ccc-53e6-4a25-9b4b-ac5d5bff9bd6] Running
addons_test.go:569: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.0069458s
addons_test.go:574: (dbg) Run:  kubectl --context addons-401400 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:428: (dbg) Run:  kubectl --context addons-401400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:428: (dbg) Run:  kubectl --context addons-401400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:584: (dbg) Run:  kubectl --context addons-401400 delete pod task-pv-pod
addons_test.go:584: (dbg) Done: kubectl --context addons-401400 delete pod task-pv-pod: (1.2294316s)
addons_test.go:590: (dbg) Run:  kubectl --context addons-401400 delete pvc hpvc
addons_test.go:596: (dbg) Run:  kubectl --context addons-401400 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:601: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:606: (dbg) Run:  kubectl --context addons-401400 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:353: "task-pv-pod-restore" [0dc72b52-5474-4c09-b646-dbadba8ea642] Pending
helpers_test.go:353: "task-pv-pod-restore" [0dc72b52-5474-4c09-b646-dbadba8ea642] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:353: "task-pv-pod-restore" [0dc72b52-5474-4c09-b646-dbadba8ea642] Running
addons_test.go:611: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.0058055s
addons_test.go:616: (dbg) Run:  kubectl --context addons-401400 delete pod task-pv-pod-restore
addons_test.go:616: (dbg) Done: kubectl --context addons-401400 delete pod task-pv-pod-restore: (1.301736s)
addons_test.go:620: (dbg) Run:  kubectl --context addons-401400 delete pvc hpvc-restore
addons_test.go:624: (dbg) Run:  kubectl --context addons-401400 delete volumesnapshot new-snapshot-demo
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-401400 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-401400 addons disable volumesnapshots --alsologtostderr -v=1: (1.2122642s)
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-401400 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-401400 addons disable csi-hostpath-driver --alsologtostderr -v=1: (7.6124038s)
--- PASS: TestAddons/parallel/CSI (39.64s)
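
The block above is the whole CSI lifecycle in miniature. Reduced to the commands the test drives (a sketch; <profile> stands for the generated profile name, and the contents of the testdata manifests are not shown in this log):

    kubectl --context <profile> create -f testdata/csi-hostpath-driver/pvc.yaml            # PVC "hpvc", bound by the csi-hostpath driver
    kubectl --context <profile> create -f testdata/csi-hostpath-driver/pv-pod.yaml         # pod "task-pv-pod" mounts the claim
    kubectl --context <profile> create -f testdata/csi-hostpath-driver/snapshot.yaml       # VolumeSnapshot "new-snapshot-demo" of hpvc
    kubectl --context <profile> delete pod task-pv-pod
    kubectl --context <profile> delete pvc hpvc
    kubectl --context <profile> create -f testdata/csi-hostpath-driver/pvc-restore.yaml    # PVC "hpvc-restore", presumably sourced from the snapshot
    kubectl --context <profile> create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml # pod "task-pv-pod-restore" verifies the restored volume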

TestAddons/parallel/Headlamp (37.37s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:810: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-401400 --alsologtostderr -v=1
addons_test.go:810: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-401400 --alsologtostderr -v=1: (2.0853545s)
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:353: "headlamp-dfcdc64b-xc885" [9bb5e5d7-a612-4934-b7f9-2cc5b1bece66] Pending
helpers_test.go:353: "headlamp-dfcdc64b-xc885" [9bb5e5d7-a612-4934-b7f9-2cc5b1bece66] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:353: "headlamp-dfcdc64b-xc885" [9bb5e5d7-a612-4934-b7f9-2cc5b1bece66] Running
addons_test.go:815: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 28.0065712s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-401400 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-401400 addons disable headlamp --alsologtostderr -v=1: (7.2732321s)
--- PASS: TestAddons/parallel/Headlamp (37.37s)
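
Headlamp follows the generic addon lifecycle: enable, wait for the labelled pod to reach Running, disable. A minimal replay, using only commands and labels that appear in the log (<profile> is a placeholder):

    out/minikube-windows-amd64.exe addons enable headlamp -p <profile>
    kubectl --context <profile> -n headlamp get pods -l app.kubernetes.io/name=headlamp    # poll until Running
    out/minikube-windows-amd64.exe -p <profile> addons disable headlamp --alsologtostderr -v=1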

TestAddons/parallel/CloudSpanner (7.43s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:353: "cloud-spanner-emulator-5bdddb765-nndfs" [729a250f-c219-4c99-913c-ac87b0b1ef19] Running
addons_test.go:842: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.0060498s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-401400 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-401400 addons disable cloud-spanner --alsologtostderr -v=1: (1.4180707s)
--- PASS: TestAddons/parallel/CloudSpanner (7.43s)

TestAddons/parallel/LocalPath (15.18s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:951: (dbg) Run:  kubectl --context addons-401400 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:957: (dbg) Run:  kubectl --context addons-401400 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:961: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:403: (dbg) Run:  kubectl --context addons-401400 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:353: "test-local-path" [6df4b1f0-4197-495c-9b5e-6db30fc21562] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "test-local-path" [6df4b1f0-4197-495c-9b5e-6db30fc21562] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:353: "test-local-path" [6df4b1f0-4197-495c-9b5e-6db30fc21562] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:964: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 7.0059915s
addons_test.go:969: (dbg) Run:  kubectl --context addons-401400 get pvc test-pvc -o=json
addons_test.go:978: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-401400 ssh "cat /opt/local-path-provisioner/pvc-3f3e6b2b-dd3b-4fe1-a64b-a980c7c6f1a3_default_test-pvc/file1"
addons_test.go:990: (dbg) Run:  kubectl --context addons-401400 delete pod test-local-path
addons_test.go:994: (dbg) Run:  kubectl --context addons-401400 delete pvc test-pvc
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-401400 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (15.18s)

TestAddons/parallel/NvidiaDevicePlugin (6.82s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:353: "nvidia-device-plugin-daemonset-jvggj" [cb33ef92-6bac-4376-b3f4-064d60c8e272] Running
addons_test.go:1027: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0049316s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-401400 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.82s)

TestAddons/parallel/Yakd (13.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:353: "yakd-dashboard-5ff678cb9-58sv7" [b4fc5f17-48c3-4a8c-a2ec-babbd737719b] Running
addons_test.go:1049: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.0736418s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-401400 addons disable yakd --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-401400 addons disable yakd --alsologtostderr -v=1: (7.9389687s)
--- PASS: TestAddons/parallel/Yakd (13.01s)

TestAddons/parallel/AmdGpuDevicePlugin (6.45s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:353: "amd-gpu-device-plugin-wrqpc" [98664c34-fea8-4ed5-88ee-e04315347a31] Running
addons_test.go:1040: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.0147776s
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-401400 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-401400 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: (1.4327893s)
--- PASS: TestAddons/parallel/AmdGpuDevicePlugin (6.45s)

TestAddons/StoppedEnableDisable (12.89s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-401400
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-401400: (12.0829356s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-401400
addons_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-401400
addons_test.go:187: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-401400
--- PASS: TestAddons/StoppedEnableDisable (12.89s)

TestCertOptions (57.41s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-406500 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost
E1217 01:38:14.146878    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
cert_options_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-options-406500 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker --apiserver-name=localhost: (52.1131608s)
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-406500 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
I1217 01:38:58.548751    4168 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8555/tcp") 0).HostPort}}'" cert-options-406500
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-406500 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:176: Cleaning up "cert-options-406500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-406500
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-406500: (4.072033s)
--- PASS: TestCertOptions (57.41s)
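
The pass criterion here is that the extra SANs and the non-default port requested at start time actually land in the generated apiserver certificate and in-node kubeconfig. Condensed to the commands from the log (<profile> is a placeholder):

    out/minikube-windows-amd64.exe start -p <profile> --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker
    # the requested IPs and names should appear among the certificate's SANs
    out/minikube-windows-amd64.exe -p <profile> ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
    # port 8555 should appear in the server URL of the in-node kubeconfig
    out/minikube-windows-amd64.exe ssh -p <profile> -- "sudo cat /etc/kubernetes/admin.conf"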

TestCertExpiration (285.59s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-358800 --memory=3072 --cert-expiration=3m --driver=docker
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-358800 --memory=3072 --cert-expiration=3m --driver=docker: (56.9275049s)
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-358800 --memory=3072 --cert-expiration=8760h --driver=docker
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-358800 --memory=3072 --cert-expiration=8760h --driver=docker: (37.1068774s)
helpers_test.go:176: Cleaning up "cert-expiration-358800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-358800
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-358800: (11.5543316s)
--- PASS: TestCertExpiration (285.59s)
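
The two starts differ only in --cert-expiration: the first issues certificates with a three-minute lifetime, and the roughly three-minute gap between the two runs (285.59s total against ~94s of start time) presumably lets them expire, so the second start has to succeed by re-issuing certificates with the one-year 8760h lifetime. In outline:

    out/minikube-windows-amd64.exe start -p <profile> --memory=3072 --cert-expiration=3m --driver=docker
    # ... wait out the 3m certificate lifetime ...
    out/minikube-windows-amd64.exe start -p <profile> --memory=3072 --cert-expiration=8760h --driver=docker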

TestDockerFlags (74.75s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-110500 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-110500 --cache-images=false --memory=3072 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker: (1m9.3833007s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-110500 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-110500 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:176: Cleaning up "docker-flags-110500" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-110500
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-110500: (4.044491s)
--- PASS: TestDockerFlags (74.75s)
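
What is being checked: the --docker-env and --docker-opt values must survive into the systemd unit that runs dockerd inside the node. The two ssh probes from the log make that concrete:

    out/minikube-windows-amd64.exe start -p <profile> --memory=3072 --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --driver=docker
    # FOO=BAR and BAZ=BAT should be listed in the unit's Environment
    out/minikube-windows-amd64.exe -p <profile> ssh "sudo systemctl show docker --property=Environment --no-pager"
    # --debug and --icc=true should be on the dockerd command line
    out/minikube-windows-amd64.exe -p <profile> ssh "sudo systemctl show docker --property=ExecStart --no-pager"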

TestForceSystemdEnv (60.15s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-314000 --memory=3072 --alsologtostderr -v=5 --driver=docker
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-314000 --memory=3072 --alsologtostderr -v=5 --driver=docker: (53.1836966s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-314000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:176: Cleaning up "force-systemd-env-314000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-314000
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-314000: (6.2313738s)
--- PASS: TestForceSystemdEnv (60.15s)
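
The follow-up ssh asks dockerd which cgroup driver it ended up with; with systemd forced through the environment (presumably MINIKUBE_FORCE_SYSTEMD, the variable visible in the environment listings elsewhere in this report) the expected answer is systemd rather than the cgroupfs default. Sketch, sh-style env prefix for illustration only:

    MINIKUBE_FORCE_SYSTEMD=true out/minikube-windows-amd64.exe start -p <profile> --memory=3072 --alsologtostderr -v=5 --driver=docker
    out/minikube-windows-amd64.exe -p <profile> ssh "docker info --format {{.CgroupDriver}}"   # expect: systemd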

TestErrorSpam/start (2.61s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 start --dry-run
--- PASS: TestErrorSpam/start (2.61s)

TestErrorSpam/status (2.17s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 status
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 status
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 status
--- PASS: TestErrorSpam/status (2.17s)

TestErrorSpam/pause (2.62s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 pause
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 pause: (1.175686s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 pause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 pause
--- PASS: TestErrorSpam/pause (2.62s)

TestErrorSpam/unpause (2.59s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 unpause
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 unpause
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 unpause
--- PASS: TestErrorSpam/unpause (2.59s)

TestErrorSpam/stop (19.32s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 stop: (12.0359553s)
error_spam_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 stop
error_spam_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 stop: (3.6814682s)
error_spam_test.go:172: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 stop
error_spam_test.go:172: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-365700 --log_dir C:\Users\jenkins.minikube4\AppData\Local\Temp\nospam-365700 stop: (3.6027113s)
--- PASS: TestErrorSpam/stop (19.32s)

TestFunctional/serial/CopySyncFile (0.04s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

TestFunctional/serial/StartWithProxy (85.57s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-045600 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker
E1217 00:15:33.685169    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:15:33.692002    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:15:33.703741    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:15:33.725132    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:15:33.766688    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:15:33.848726    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:15:34.010374    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:15:34.332583    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:15:34.974387    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:15:36.256043    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:15:38.818486    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:15:43.940558    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:15:54.183186    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:2239: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-045600 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker: (1m25.5660118s)
--- PASS: TestFunctional/serial/StartWithProxy (85.57s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (51.46s)

=== RUN   TestFunctional/serial/SoftStart
I1217 00:16:02.320166    4168 config.go:182] Loaded profile config "functional-045600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
functional_test.go:674: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-045600 --alsologtostderr -v=8
E1217 00:16:14.665357    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:674: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-045600 --alsologtostderr -v=8: (51.4535726s)
functional_test.go:678: soft start took 51.4546478s for "functional-045600" cluster.
I1217 00:16:53.774219    4168 config.go:182] Loaded profile config "functional-045600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/SoftStart (51.46s)

TestFunctional/serial/KubeContext (0.09s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.09s)

TestFunctional/serial/KubectlGetPods (0.26s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-045600 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.26s)

TestFunctional/serial/CacheCmd/cache/add_remote (10.5s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 cache add registry.k8s.io/pause:3.1
E1217 00:16:55.627975    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-045600 cache add registry.k8s.io/pause:3.1: (3.9339235s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-045600 cache add registry.k8s.io/pause:3.3: (3.2578122s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-045600 cache add registry.k8s.io/pause:latest: (3.3109704s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (10.50s)
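
minikube cache add pulls an image on the host and preloads it into the node's container runtime, so the three adds above amount to:

    out/minikube-windows-amd64.exe -p <profile> cache add registry.k8s.io/pause:3.1
    out/minikube-windows-amd64.exe -p <profile> cache add registry.k8s.io/pause:3.3
    out/minikube-windows-amd64.exe -p <profile> cache add registry.k8s.io/pause:latest
    # verify_cache_inside_node later confirms they landed, via:
    out/minikube-windows-amd64.exe -p <profile> ssh sudo crictl images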

TestFunctional/serial/CacheCmd/cache/add_local (4.39s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-045600 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2493974006\001
functional_test.go:1092: (dbg) Done: docker build -t minikube-local-cache-test:functional-045600 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local2493974006\001: (1.5507117s)
functional_test.go:1104: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 cache add minikube-local-cache-test:functional-045600
functional_test.go:1104: (dbg) Done: out/minikube-windows-amd64.exe -p functional-045600 cache add minikube-local-cache-test:functional-045600: (2.5768386s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 cache delete minikube-local-cache-test:functional-045600
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-045600
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (4.39s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.21s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.21s)

TestFunctional/serial/CacheCmd/cache/list (0.19s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.19s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.59s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.59s)

TestFunctional/serial/CacheCmd/cache/cache_reload (4.56s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-045600 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (579.8097ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-windows-amd64.exe -p functional-045600 cache reload: (2.8025939s)
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (4.56s)
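
The reload sequence is the interesting part of the cache tests: delete the image inside the node, prove it is gone (the exit-1 inspecti above is the expected outcome, not a failure), then let cache reload re-push everything still registered in the cache:

    out/minikube-windows-amd64.exe -p <profile> ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-windows-amd64.exe -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # exit 1: image is gone
    out/minikube-windows-amd64.exe -p <profile> cache reload
    out/minikube-windows-amd64.exe -p <profile> ssh sudo crictl inspecti registry.k8s.io/pause:latest   # now succeeds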

TestFunctional/serial/CacheCmd/cache/delete (0.38s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.38s)

TestFunctional/serial/MinikubeKubectlCmd (0.38s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 kubectl -- --context functional-045600 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.38s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (1.83s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out\kubectl.exe --context functional-045600 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.83s)

TestFunctional/serial/ExtraConfig (47.61s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-045600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:772: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-045600 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (47.6104681s)
functional_test.go:776: restart took 47.6112804s for "functional-045600" cluster.
I1217 00:18:04.774979    4168 config.go:182] Loaded profile config "functional-045600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestFunctional/serial/ExtraConfig (47.61s)
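
--extra-config is the component.key=value passthrough to the Kubernetes components; here it injects an extra admission plugin into the apiserver and restarts the existing cluster with it:

    out/minikube-windows-amd64.exe start -p <profile> --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all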

TestFunctional/serial/ComponentHealth (0.14s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-045600 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.14s)

TestFunctional/serial/LogsCmd (1.89s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 logs
functional_test.go:1251: (dbg) Done: out/minikube-windows-amd64.exe -p functional-045600 logs: (1.8854968s)
--- PASS: TestFunctional/serial/LogsCmd (1.89s)

TestFunctional/serial/LogsFileCmd (1.92s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2712661135\001\logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-045600 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalserialLogsFileCmd2712661135\001\logs.txt: (1.9208922s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.92s)

TestFunctional/serial/InvalidService (5.06s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-045600 apply -f testdata\invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-045600
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-045600: exit status 115 (1.0680819s)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32551 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_service_9c977cb937a5c6299cc91c983e64e702e081bf76_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-045600 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (5.06s)

TestFunctional/parallel/ConfigCmd (1.19s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-045600 config get cpus: exit status 14 (176.9889ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-045600 config get cpus: exit status 14 (155.0108ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.19s)
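
Exit status 14 is the expected outcome of config get on an unset key, so the two "Non-zero exit" entries above are passes, not failures. The round trip being verified:

    out/minikube-windows-amd64.exe -p <profile> config unset cpus
    out/minikube-windows-amd64.exe -p <profile> config get cpus     # exit 14: key not found in config
    out/minikube-windows-amd64.exe -p <profile> config set cpus 2
    out/minikube-windows-amd64.exe -p <profile> config get cpus     # prints 2
    out/minikube-windows-amd64.exe -p <profile> config unset cpus
    out/minikube-windows-amd64.exe -p <profile> config get cpus     # exit 14 again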

TestFunctional/parallel/DryRun (1.52s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-045600 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:989: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-045600 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (629.9271ms)

-- stdout --
	* [functional-045600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1217 00:18:33.256149    9028 out.go:360] Setting OutFile to fd 1940 ...
	I1217 00:18:33.302149    9028 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:18:33.302149    9028 out.go:374] Setting ErrFile to fd 1404...
	I1217 00:18:33.302149    9028 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:18:33.315149    9028 out.go:368] Setting JSON to false
	I1217 00:18:33.318149    9028 start.go:133] hostinfo: {"hostname":"minikube4","uptime":2301,"bootTime":1765928411,"procs":195,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:18:33.318149    9028 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:18:33.322144    9028 out.go:179] * [functional-045600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:18:33.324151    9028 notify.go:221] Checking for updates...
	I1217 00:18:33.327167    9028 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:18:33.331160    9028 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:18:33.335177    9028 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:18:33.343148    9028 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:18:33.356148    9028 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:18:33.359146    9028 config.go:182] Loaded profile config "functional-045600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 00:18:33.360146    9028 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:18:33.474155    9028 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:18:33.477166    9028 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:18:33.713464    9028 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-17 00:18:33.693409186 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:18:33.717477    9028 out.go:179] * Using the docker driver based on existing profile
	I1217 00:18:33.719476    9028 start.go:309] selected driver: docker
	I1217 00:18:33.719476    9028 start.go:927] validating driver "docker" against &{Name:functional-045600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-045600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:18:33.719476    9028 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:18:33.765720    9028 out.go:203] 
	W1217 00:18:33.767660    9028 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 00:18:33.771263    9028 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-045600 --dry-run --alsologtostderr -v=1 --driver=docker
--- PASS: TestFunctional/parallel/DryRun (1.52s)
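
Editor's note: the RSRC_INSUFFICIENT_REQ_MEMORY failure above is the dry-run validation rejecting a 250MiB request against the 1800MB usable floor reported in the log. A minimal Go sketch of that kind of pre-flight check follows; it is not minikube's actual code, the helper names are hypothetical, and the exit status 23 mirrors the localized run below.

// memcheck.go - a minimal sketch (not minikube's implementation) of a
// memory-floor validation that fails before any container is created.
package main

import (
	"fmt"
	"os"
)

const minUsableMemoryMB = 1800 // floor reported in the log above

func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf(
			"RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Fprintln(os.Stderr, "X Exiting due to", err)
		os.Exit(23) // exit status observed for this failure mode in the log
	}
}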

TestFunctional/parallel/InternationalLanguage (0.67s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-045600 --dry-run --memory 250MB --alsologtostderr --driver=docker
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-045600 --dry-run --memory 250MB --alsologtostderr --driver=docker: exit status 23 (667.733ms)

-- stdout --
	* [functional-045600] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1217 00:18:34.786050   13772 out.go:360] Setting OutFile to fd 1848 ...
	I1217 00:18:34.844037   13772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:18:34.844037   13772 out.go:374] Setting ErrFile to fd 1984...
	I1217 00:18:34.844037   13772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:18:34.864050   13772 out.go:368] Setting JSON to false
	I1217 00:18:34.869050   13772 start.go:133] hostinfo: {"hostname":"minikube4","uptime":2303,"bootTime":1765928411,"procs":192,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:18:34.869050   13772 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:18:34.873041   13772 out.go:179] * [functional-045600] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:18:34.877037   13772 notify.go:221] Checking for updates...
	I1217 00:18:34.880040   13772 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:18:34.886034   13772 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:18:34.894040   13772 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:18:34.897241   13772 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:18:34.899828   13772 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:18:34.902815   13772 config.go:182] Loaded profile config "functional-045600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 00:18:34.904365   13772 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:18:35.027658   13772 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:18:35.030649   13772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:18:35.277738   13772 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:86 SystemTime:2025-12-17 00:18:35.258195758 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescription:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Program Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:18:35.284738   13772 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1217 00:18:35.288736   13772 start.go:309] selected driver: docker
	I1217 00:18:35.288736   13772 start.go:927] validating driver "docker" against &{Name:functional-045600 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.2 ClusterName:functional-045600 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:18:35.288736   13772 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:18:35.328953   13772 out.go:203] 
	W1217 00:18:35.331936   13772 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1217 00:18:35.333937   13772 out.go:203] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.67s)
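
Editor's note: the French stderr above is the same RSRC_INSUFFICIENT_REQ_MEMORY failure rendered under a French locale ("L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo" = "the requested memory allocation 250MiB is less than the usable minimum of 1800MB"), which is exactly what this test verifies. A minimal sketch of locale-keyed message lookup follows; minikube's real translation machinery differs, and the message table here is illustrative only.

// i18n.go - a minimal sketch of selecting a translated message from the
// LANG environment variable's language prefix, falling back to English.
package main

import (
	"fmt"
	"os"
	"strings"
)

var messages = map[string]map[string]string{
	"fr": {
		"Using the docker driver based on existing profile": "Utilisation du pilote docker basé sur le profil existant",
	},
}

func translate(msg string) string {
	lang := os.Getenv("LANG") // e.g. "fr_FR.UTF-8"
	if i := strings.IndexAny(lang, "_."); i > 0 {
		lang = lang[:i]
	}
	if t, ok := messages[lang][msg]; ok {
		return t
	}
	return msg // fall back to the English original
}

func main() {
	fmt.Println("* " + translate("Using the docker driver based on existing profile"))
}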

TestFunctional/parallel/StatusCmd (1.96s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 status
functional_test.go:875: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.96s)
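
Editor's note: the -f argument above is a Go text/template rendered against the status structure; everything outside {{...}} (including the literal "kublet:" label in the test's format string) is emitted verbatim. A minimal sketch with a hypothetical Status struct, whose field names merely mirror the log:

// statusfmt.go - a minimal sketch of rendering a custom status format
// string with text/template, as the StatusCmd run above exercises.
package main

import (
	"os"
	"text/template"
)

type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	const format = "host:{{.Host}},kubelet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"
	tmpl := template.Must(template.New("status").Parse(format))
	st := Status{Host: "Running", Kubelet: "Running", APIServer: "Running", Kubeconfig: "Configured"}
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
}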

TestFunctional/parallel/AddonsCmd (0.42s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.42s)

TestFunctional/parallel/PersistentVolumeClaim (26.76s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:353: "storage-provisioner" [7e56b27e-e57b-42b8-a90d-26d0319d93d6] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.0177549s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-045600 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-045600 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-045600 get pvc myclaim -o=json
I1217 00:18:21.743727    4168 retry.go:31] will retry after 1.305108194s: testpvc phase = "Pending", want "Bound" (msg={TypeMeta:{Kind:PersistentVolumeClaim APIVersion:v1} ObjectMeta:{Name:myclaim GenerateName: Namespace:default SelfLink: UID:1d3f36db-c760-4348-ba1b-2107a2045b51 ResourceVersion:797 Generation:0 CreationTimestamp:2025-12-17 00:18:21 +0000 UTC DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[] Annotations:map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] OwnerReferences:[] Finalizers:[kubernetes.io/pvc-protection] ManagedFields:[]} Spec:{AccessModes:[ReadWriteOnce] Selector:nil Resources:{Limits:map[] Requests:map[storage:{i:{value:524288000 scale:0} d:{Dec:<nil>} s:500Mi Format:BinarySI}]} VolumeName: StorageClassName:0xc0017d5ad0 VolumeMode:0xc0017d5ae0 DataSource:nil DataSourceRef:nil VolumeAttributesClassName:<nil>} Status:{Phase:Pending AccessModes:[] Capacity:map[] Conditions:[] AllocatedResources:map[] AllocatedResourceStatuses:map[] CurrentVolumeAttributesClassName:<nil> ModifyVolumeStatus:nil}})
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-045600 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-045600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [cb3e4dba-0b1e-49b5-982f-a3f02f3fa63e] Pending
helpers_test.go:353: "sp-pod" [cb3e4dba-0b1e-49b5-982f-a3f02f3fa63e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [cb3e4dba-0b1e-49b5-982f-a3f02f3fa63e] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.0067503s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-045600 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-045600 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:112: (dbg) Done: kubectl --context functional-045600 delete -f testdata/storage-provisioner/pod.yaml: (1.489483s)
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-045600 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:353: "sp-pod" [447f6c03-8949-41f7-ae45-a2905ea23cc2] Pending
helpers_test.go:353: "sp-pod" [447f6c03-8949-41f7-ae45-a2905ea23cc2] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:353: "sp-pod" [447f6c03-8949-41f7-ae45-a2905ea23cc2] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.0095509s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-045600 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.76s)
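
Editor's note: the retry at 00:18:21 above is the harness polling the claim until .status.phase flips from Pending to Bound. A minimal sketch of that poll loop, shelling out to kubectl as the test does (context and claim names are taken from the log; error handling is abbreviated and the fixed one-second sleep stands in for the harness's jittered backoff):

// pvcwait.go - a minimal sketch of polling a PVC until it is Bound.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"time"
)

type pvc struct {
	Status struct {
		Phase string `json:"phase"`
	} `json:"status"`
}

func pvcPhase(context, name string) (string, error) {
	out, err := exec.Command("kubectl", "--context", context, "get", "pvc", name, "-o=json").Output()
	if err != nil {
		return "", err
	}
	var claim pvc
	if err := json.Unmarshal(out, &claim); err != nil {
		return "", err
	}
	return claim.Status.Phase, nil
}

func main() {
	for {
		phase, err := pvcPhase("functional-045600", "myclaim")
		if err == nil && phase == "Bound" {
			fmt.Println("pvc bound")
			return
		}
		fmt.Printf("pvc phase = %q, want \"Bound\"; retrying\n", phase)
		time.Sleep(time.Second)
	}
}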

TestFunctional/parallel/SSHCmd (1.18s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (1.18s)

TestFunctional/parallel/CpCmd (3.16s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh -n functional-045600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 cp functional-045600:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalparallelCpCmd2737548863\001\cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh -n functional-045600 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh -n functional-045600 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (3.16s)

TestFunctional/parallel/MySQL (71.24s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-045600 replace --force -f testdata\mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:353: "mysql-6bcdcbc558-2ln25" [6d5a34e4-19e8-4980-bef0-82b036716341] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:353: "mysql-6bcdcbc558-2ln25" [6d5a34e4-19e8-4980-bef0-82b036716341] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 53.0059653s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-045600 exec mysql-6bcdcbc558-2ln25 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-045600 exec mysql-6bcdcbc558-2ln25 -- mysql -ppassword -e "show databases;": exit status 1 (252.8787ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1217 00:19:32.912062    4168 retry.go:31] will retry after 1.016123374s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-045600 exec mysql-6bcdcbc558-2ln25 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-045600 exec mysql-6bcdcbc558-2ln25 -- mysql -ppassword -e "show databases;": exit status 1 (197.4974ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1217 00:19:34.130044    4168 retry.go:31] will retry after 1.897478247s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-045600 exec mysql-6bcdcbc558-2ln25 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-045600 exec mysql-6bcdcbc558-2ln25 -- mysql -ppassword -e "show databases;": exit status 1 (207.4184ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1217 00:19:36.239653    4168 retry.go:31] will retry after 2.235106288s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-045600 exec mysql-6bcdcbc558-2ln25 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-045600 exec mysql-6bcdcbc558-2ln25 -- mysql -ppassword -e "show databases;": exit status 1 (237.3989ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1217 00:19:38.717844    4168 retry.go:31] will retry after 2.229190765s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-045600 exec mysql-6bcdcbc558-2ln25 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-045600 exec mysql-6bcdcbc558-2ln25 -- mysql -ppassword -e "show databases;": exit status 1 (253.6246ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
I1217 00:19:41.204484    4168 retry.go:31] will retry after 4.195944542s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-045600 exec mysql-6bcdcbc558-2ln25 -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-045600 exec mysql-6bcdcbc558-2ln25 -- mysql -ppassword -e "show databases;": exit status 1 (201.1011ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I1217 00:19:45.605712    4168 retry.go:31] will retry after 4.673267434s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-045600 exec mysql-6bcdcbc558-2ln25 -- mysql -ppassword -e "show databases;"
E1217 00:20:33.686135    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 00:21:01.395405    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (71.24s)
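
Editor's note: the retries above show retry.go backing off (1s, 1.9s, 2.2s, 2.2s, 4.2s, 4.7s) while mysqld inside the pod starts: ERROR 2002 means the server socket is not accepting connections yet, ERROR 1045 means the server is up but the initial root password has not been applied. A minimal sketch of such a retry loop with growing delays follows; pod name and command come from the log, while the backoff constants and unbounded loop are illustrative only.

// mysqlwait.go - a minimal sketch of retrying "kubectl exec ... mysql"
// with growing delays until the server answers.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	delay := time.Second
	for attempt := 1; ; attempt++ {
		cmd := exec.Command("kubectl", "--context", "functional-045600",
			"exec", "mysql-6bcdcbc558-2ln25", "--",
			"mysql", "-ppassword", "-e", "show databases;")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("ready after %d attempts:\n%s", attempt, out)
			return
		}
		// ERROR 2002 (not listening) and ERROR 1045 (password not yet
		// applied) are both transient during startup, so just wait.
		fmt.Printf("will retry after %v\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the wait between attempts
	}
}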

TestFunctional/parallel/FileSync (0.56s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4168/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh "sudo cat /etc/test/nested/copy/4168/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.56s)

TestFunctional/parallel/CertSync (3.32s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4168.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh "sudo cat /etc/ssl/certs/4168.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4168.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh "sudo cat /usr/share/ca-certificates/4168.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh "sudo cat /etc/ssl/certs/41682.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh "sudo cat /usr/share/ca-certificates/41682.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (3.32s)
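
Editor's note: the hashed filenames checked above (/etc/ssl/certs/51391683.0, /etc/ssl/certs/3ec20f2e.0) follow the OpenSSL subject-hash naming scheme: the name is the hash of the certificate's subject, with a ".0" suffix disambiguating collisions. A minimal sketch of deriving that name follows, assuming an openssl binary on PATH; the certificate path is hypothetical.

// certhash.go - a minimal sketch of computing the /etc/ssl/certs/<hash>.0
// name for a PEM certificate via "openssl x509 -subject_hash".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash", "-in", "test-cert.pem").Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("/etc/ssl/certs/%s.0\n", hash) // .0 = first cert with this subject hash
}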

TestFunctional/parallel/NodeLabels (0.13s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-045600 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.13s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-045600 ssh "sudo systemctl is-active crio": exit status 1 (545.4789ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
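
Editor's note: the run above passes despite exit status 1 because the test wants crio to be inactive: "systemctl is-active" prints "inactive" and exits non-zero (3 for inactive units), so a failing command with stdout "inactive" is the success case. A minimal sketch of interpreting that result follows; invoking systemctl directly here stands in for the "minikube ssh" hop in the log.

// runtimecheck.go - a minimal sketch of asserting the non-selected
// container runtime is disabled via "systemctl is-active".
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("systemctl", "is-active", "crio").Output()
	state := strings.TrimSpace(string(out))
	if err == nil {
		// exit status 0 means the unit is active - the one outcome this check rejects
		panic("crio is active; expected the non-selected runtime to be disabled")
	}
	fmt.Printf("crio state %q with non-zero exit, as expected\n", state)
}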

TestFunctional/parallel/License (1.56s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2293: (dbg) Done: out/minikube-windows-amd64.exe license: (1.5384838s)
--- PASS: TestFunctional/parallel/License (1.56s)

TestFunctional/parallel/ServiceCmd/DeployApp (8.32s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-045600 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-045600 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:353: "hello-node-75c85bcc94-kk5vp" [ff428c4f-ab79-4ed3-b903-26624ea09afe] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-75c85bcc94-kk5vp" [ff428c4f-ab79-4ed3-b903-26624ea09afe] Running
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 8.0089319s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (8.32s)

TestFunctional/parallel/ProfileCmd/profile_not_create (1.03s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (1.03s)

TestFunctional/parallel/ProfileCmd/profile_list (0.86s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1330: Took "669.4563ms" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1344: Took "187.9715ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.86s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.99s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1381: Took "806.9257ms" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1394: Took "182.5335ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.99s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-045600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-045600 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-045600 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 8788: OpenProcess: The parameter is incorrect.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-045600 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.69s)
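
Editor's note: the "unable to kill pid 8788: OpenProcess: The parameter is incorrect" and "unable to find parent, assuming dead" lines above are benign: cleanup races with a tunnel process that has already exited, and on Windows opening a dead PID fails. A minimal sketch of that tolerant cleanup follows; the PID is taken from the log and the behavior is a sketch, not the helper's actual code.

// killtolerant.go - a minimal sketch of process cleanup that treats
// "process already gone" as success rather than a test failure.
package main

import (
	"fmt"
	"os"
)

func stopTunnel(pid int) {
	// On Windows, os.FindProcess actually opens the PID and can fail
	// if the process has already exited.
	proc, err := os.FindProcess(pid)
	if err != nil {
		fmt.Printf("unable to find pid %d, assuming dead: %v\n", pid, err)
		return
	}
	if err := proc.Kill(); err != nil {
		// Racing with a normal exit is expected; log and continue.
		fmt.Printf("unable to kill pid %d: %v\n", pid, err)
	}
}

func main() { stopTunnel(8788) }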

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-045600 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.36s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-045600 apply -f testdata\testsvc.yaml
E1217 00:18:17.551143    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:353: "nginx-svc" [45204589-fb43-4e96-b91c-a7535401f6c3] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx-svc" [45204589-fb43-4e96-b91c-a7535401f6c3] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 15.005827s
I1217 00:18:32.721104    4168 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (15.36s)

TestFunctional/parallel/Version/short (0.18s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 version --short
--- PASS: TestFunctional/parallel/Version/short (0.18s)

TestFunctional/parallel/Version/components (3.49s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-windows-amd64.exe -p functional-045600 version -o=json --components: (3.4894798s)
--- PASS: TestFunctional/parallel/Version/components (3.49s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.46s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-045600 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.34.2
registry.k8s.io/kube-proxy:v1.34.2
registry.k8s.io/kube-controller-manager:v1.34.2
registry.k8s.io/kube-apiserver:v1.34.2
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.12.1
public.ecr.aws/nginx/nginx:alpine
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-045600
docker.io/kicbase/echo-server:latest
docker.io/kicbase/echo-server:functional-045600
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-045600 image ls --format short --alsologtostderr:
I1217 00:19:15.539675   12952 out.go:360] Setting OutFile to fd 1452 ...
I1217 00:19:15.581233   12952 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:19:15.581233   12952 out.go:374] Setting ErrFile to fd 1560...
I1217 00:19:15.581233   12952 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:19:15.593235   12952 config.go:182] Loaded profile config "functional-045600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:19:15.593235   12952 config.go:182] Loaded profile config "functional-045600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:19:15.600240   12952 cli_runner.go:164] Run: docker container inspect functional-045600 --format={{.State.Status}}
I1217 00:19:15.663225   12952 ssh_runner.go:195] Run: systemctl --version
I1217 00:19:15.666877   12952 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-045600
I1217 00:19:15.725380   12952 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56218 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-045600\id_rsa Username:docker}
I1217 00:19:15.850217   12952 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.46s)
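
Editor's note: the stderr above shows "image ls" is backed by running docker images --no-trunc --format "{{json .}}" over SSH, which emits one JSON object per line. A minimal sketch of decoding that output follows; Repository, Tag, and ID are docker's documented format fields, and running docker locally here stands in for the SSH hop in the log.

// imagels.go - a minimal sketch of parsing line-delimited JSON from
// `docker images --no-trunc --format "{{json .}}"`.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageRow struct {
	Repository string `json:"Repository"`
	Tag        string `json:"Tag"`
	ID         string `json:"ID"`
}

func main() {
	cmd := exec.Command("docker", "images", "--no-trunc", "--format", "{{json .}}")
	out, err := cmd.StdoutPipe()
	if err != nil {
		panic(err)
	}
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	sc := bufio.NewScanner(out)
	for sc.Scan() {
		var row imageRow
		if err := json.Unmarshal(sc.Bytes(), &row); err != nil {
			continue // skip any non-JSON line
		}
		fmt.Printf("%s:%s\n", row.Repository, row.Tag)
	}
	if err := cmd.Wait(); err != nil {
		panic(err)
	}
}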

TestFunctional/parallel/ImageCommands/ImageListTable (0.55s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-045600 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ public.ecr.aws/nginx/nginx                  │ alpine            │ a236f84b9d5d2 │ 53.7MB │
│ registry.k8s.io/kube-apiserver              │ v1.34.2           │ a5f569d49a979 │ 88MB   │
│ registry.k8s.io/kube-controller-manager     │ v1.34.2           │ 01e8bacf0f500 │ 74.9MB │
│ registry.k8s.io/coredns/coredns             │ v1.12.1           │ 52546a367cc9e │ 75MB   │
│ docker.io/kicbase/echo-server               │ functional-045600 │ 9056ab77afb8e │ 4.94MB │
│ docker.io/kicbase/echo-server               │ latest            │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ docker.io/library/minikube-local-cache-test │ functional-045600 │ 3dfbd7ce7f025 │ 30B    │
│ registry.k8s.io/kube-proxy                  │ v1.34.2           │ 8aa150647e88a │ 71.9MB │
│ registry.k8s.io/kube-scheduler              │ v1.34.2           │ 88320b5498ff2 │ 52.8MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-045600 image ls --format table --alsologtostderr:
I1217 00:19:16.452650    2384 out.go:360] Setting OutFile to fd 1520 ...
I1217 00:19:16.496629    2384 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:19:16.496629    2384 out.go:374] Setting ErrFile to fd 1460...
I1217 00:19:16.496629    2384 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:19:16.509486    2384 config.go:182] Loaded profile config "functional-045600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:19:16.509486    2384 config.go:182] Loaded profile config "functional-045600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:19:16.516482    2384 cli_runner.go:164] Run: docker container inspect functional-045600 --format={{.State.Status}}
I1217 00:19:16.573477    2384 ssh_runner.go:195] Run: systemctl --version
I1217 00:19:16.576476    2384 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-045600
I1217 00:19:16.636470    2384 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56218 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-045600\id_rsa Username:docker}
I1217 00:19:16.806369    2384 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.55s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.45s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-045600 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"3dfbd7ce7f02518ee06e7f27cfc1795f8d9a41bd7d80f3a874e9326090bbbae3","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-045600"],"size":"30"},{"id":"a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.2"],"size":"88000000"},{"id":"88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.2"],"size":"52800000"},{"id":"01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.2"],"size":"74900000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"0184c1613d92931
126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c","repoDigests":[],"repoTags":["public.ecr.aws/nginx/nginx:alpine"],"size":"53700000"},{"id":"8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.34.2"],"size":"71900000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":[],"repoTags":["registry.k8s.io/coredns
/coredns:v1.12.1"],"size":"75000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-045600","docker.io/kicbase/echo-server:latest"],"size":"4940000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-045600 image ls --format json --alsologtostderr:
I1217 00:19:16.003905    9864 out.go:360] Setting OutFile to fd 1844 ...
I1217 00:19:16.047810    9864 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:19:16.047810    9864 out.go:374] Setting ErrFile to fd 1792...
I1217 00:19:16.047810    9864 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:19:16.061031    9864 config.go:182] Loaded profile config "functional-045600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:19:16.062032    9864 config.go:182] Loaded profile config "functional-045600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:19:16.069998    9864 cli_runner.go:164] Run: docker container inspect functional-045600 --format={{.State.Status}}
I1217 00:19:16.127119    9864 ssh_runner.go:195] Run: systemctl --version
I1217 00:19:16.131128    9864 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-045600
I1217 00:19:16.183123    9864 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56218 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-045600\id_rsa Username:docker}
I1217 00:19:16.313746    9864 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.45s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-045600 image ls --format yaml --alsologtostderr:
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 3dfbd7ce7f02518ee06e7f27cfc1795f8d9a41bd7d80f3a874e9326090bbbae3
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-045600
size: "30"
- id: 8aa150647e88a80f2e8c7bd5beb3b7af1209fb4004a261e86b617f40849c6d45
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.34.2
size: "71900000"
- id: 01e8bacf0f50095b9b12daf485979dbcb454e08c405e42bde98e3d2198e475e8
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.2
size: "74900000"
- id: 88320b5498ff2caef2e5b089fc2c49c81d6529dcbba1481eb04badc3e40e5952
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.2
size: "52800000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: a236f84b9d5d27fe4bf2bab07501cccdc8e16bb38a41f83e245216bbd2b61b5c
repoDigests: []
repoTags:
- public.ecr.aws/nginx/nginx:alpine
size: "53700000"
- id: a5f569d49a979d9f62c742edf7a6b6ee8b3cf5855e05dacb0647445bb62ffb85
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.2
size: "88000000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "75000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-045600
- docker.io/kicbase/echo-server:latest
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"

                                                
                                                
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-045600 image ls --format yaml --alsologtostderr:
I1217 00:19:17.015657    7780 out.go:360] Setting OutFile to fd 1676 ...
I1217 00:19:17.058466    7780 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:19:17.058466    7780 out.go:374] Setting ErrFile to fd 1908...
I1217 00:19:17.058466    7780 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:19:17.070505    7780 config.go:182] Loaded profile config "functional-045600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:19:17.071441    7780 config.go:182] Loaded profile config "functional-045600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:19:17.079826    7780 cli_runner.go:164] Run: docker container inspect functional-045600 --format={{.State.Status}}
I1217 00:19:17.145385    7780 ssh_runner.go:195] Run: systemctl --version
I1217 00:19:17.148405    7780 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-045600
I1217 00:19:17.211513    7780 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56218 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-045600\id_rsa Username:docker}
I1217 00:19:17.394484    7780 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.52s)

TestFunctional/parallel/ImageCommands/ImageBuild (10.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-045600 ssh pgrep buildkitd: exit status 1 (573.0462ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image build -t localhost/my-image:functional-045600 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-045600 image build -t localhost/my-image:functional-045600 testdata\build --alsologtostderr: (8.9872885s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-045600 image build -t localhost/my-image:functional-045600 testdata\build --alsologtostderr:
I1217 00:19:18.102752    5664 out.go:360] Setting OutFile to fd 1908 ...
I1217 00:19:18.166542    5664 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:19:18.166542    5664 out.go:374] Setting ErrFile to fd 1900...
I1217 00:19:18.166542    5664 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:19:18.178550    5664 config.go:182] Loaded profile config "functional-045600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:19:18.202019    5664 config.go:182] Loaded profile config "functional-045600": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
I1217 00:19:18.210028    5664 cli_runner.go:164] Run: docker container inspect functional-045600 --format={{.State.Status}}
I1217 00:19:18.269604    5664 ssh_runner.go:195] Run: systemctl --version
I1217 00:19:18.272604    5664 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-045600
I1217 00:19:18.326214    5664 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56218 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-045600\id_rsa Username:docker}
I1217 00:19:18.489270    5664 build_images.go:162] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3228991849.tar
I1217 00:19:18.495404    5664 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 00:19:18.516184    5664 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3228991849.tar
I1217 00:19:18.527038    5664 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3228991849.tar: stat -c "%s %y" /var/lib/minikube/build/build.3228991849.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3228991849.tar': No such file or directory
I1217 00:19:18.527038    5664 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3228991849.tar --> /var/lib/minikube/build/build.3228991849.tar (3072 bytes)
I1217 00:19:18.604483    5664 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3228991849
I1217 00:19:18.624709    5664 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3228991849 -xf /var/lib/minikube/build/build.3228991849.tar
I1217 00:19:18.638493    5664 docker.go:361] Building image: /var/lib/minikube/build/build.3228991849
I1217 00:19:18.641935    5664 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-045600 /var/lib/minikube/build/build.3228991849
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B 0.0s done
#1 DONE 0.2s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B 0.0s done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.1s done
#5 DONE 0.9s

#6 [2/3] RUN true
#6 DONE 4.6s

#7 [3/3] ADD content.txt /
#7 DONE 0.2s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:68f376db1ec087cf89cba46edacfb152f0ecc6d9ea69c4137f2630673004fee8 done
#8 naming to localhost/my-image:functional-045600 0.0s done
#8 DONE 0.3s
I1217 00:19:26.949337    5664 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-045600 /var/lib/minikube/build/build.3228991849: (8.3067758s)
I1217 00:19:26.954334    5664 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3228991849
I1217 00:19:26.972203    5664 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3228991849.tar
I1217 00:19:26.986476    5664 build_images.go:218] Built localhost/my-image:functional-045600 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3228991849.tar
I1217 00:19:26.986744    5664 build_images.go:134] succeeded building to: functional-045600
I1217 00:19:26.986784    5664 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image ls
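The build stages above (#5 through #7) imply a three-line build context: a gcr.io/k8s-minikube/busybox base, a no-op RUN, and a single ADD. The actual contents of testdata\build are not included in this log, so the following Dockerfile is a reconstruction from the stage names, not a verbatim copy:

	FROM gcr.io/k8s-minikube/busybox:latest
	RUN true
	ADD content.txt /

Rebuilding it by hand uses the same command the test ran, with the tar packaging and scp into the node handled by minikube:

	out/minikube-windows-amd64.exe -p functional-045600 image build -t localhost/my-image:functional-045600 testdata\build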
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (10.06s)

TestFunctional/parallel/ImageCommands/Setup (1.8s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:357: (dbg) Done: docker pull kicbase/echo-server:1.0: (1.6737251s)
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-045600
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.80s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image load --daemon kicbase/echo-server:functional-045600 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-045600 image load --daemon kicbase/echo-server:functional-045600 --alsologtostderr: (3.7979889s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (4.33s)

TestFunctional/parallel/ServiceCmd/List (0.67s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.67s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.81s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 service list -o json
functional_test.go:1504: Took "807.8293ms" to run "out/minikube-windows-amd64.exe -p functional-045600 service list -o json"
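Because service list -o json prints a single JSON array, the output can be consumed programmatically rather than scraped. A minimal PowerShell sketch, assuming the functional-045600 profile is still running and that each array element carries the Namespace/Name fields minikube currently emits (the field names are an assumption, not verified in this log):

	out/minikube-windows-amd64.exe -p functional-045600 service list -o json |
	    ConvertFrom-Json |
	    ForEach-Object { "$($_.Namespace)/$($_.Name)" }   # e.g. default/hello-node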
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.81s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-045600 service --namespace=default --https --url hello-node: exit status 1 (15.0162999s)

-- stdout --
	https://127.0.0.1:56463
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1532: found endpoint: https://127.0.0.1:56463
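The exit status 1 is expected rather than a failure: with the Docker driver on Windows, minikube service proxies the port through a tunnel process that must keep its terminal open, so the test reads the URL and then tears the process down. Used interactively, the command stays running in one shell while a second shell queries the printed endpoint; a sketch (the port is whatever the tunnel printed, and --insecure applies only if the service actually terminates TLS with an untrusted certificate):

	# shell 1: keep open; prints the tunnelled URL
	out/minikube-windows-amd64.exe -p functional-045600 service --namespace=default --https --url hello-node
	# shell 2: query the printed endpoint
	curl.exe --insecure https://127.0.0.1:56463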
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (15.02s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image load --daemon kicbase/echo-server:functional-045600 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-045600 image load --daemon kicbase/echo-server:functional-045600 --alsologtostderr: (2.4047319s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.86s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-045600
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image load --daemon kicbase/echo-server:functional-045600 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-045600 image load --daemon kicbase/echo-server:functional-045600 --alsologtostderr: (2.3604255s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (3.54s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.67s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image save kicbase/echo-server:functional-045600 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.67s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.91s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image rm kicbase/echo-server:functional-045600 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.91s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-045600 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
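A LoadBalancer service only gets an ingress IP here while minikube tunnel is running; once it is, the assigned address can be read back with the same jsonpath query the test uses:

	kubectl --context functional-045600 get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'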
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.13s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-045600 tunnel --alsologtostderr] ...
helpers_test.go:526: unable to kill pid 5368: OpenProcess: The parameter is incorrect.
helpers_test.go:526: unable to kill pid 4856: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.13s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image ls
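Together with ImageSaveToFile above, this completes the tar round trip: export an image from the node's runtime to a host file, then import it back. Condensed, the two sides are (the C:\jenkins\... path is simply where this job keeps its workspace; any writable path works):

	out/minikube-windows-amd64.exe -p functional-045600 image save kicbase/echo-server:functional-045600 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar
	out/minikube-windows-amd64.exe -p functional-045600 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar
	out/minikube-windows-amd64.exe -p functional-045600 image ls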
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.13s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.99s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-045600
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 image save --daemon kicbase/echo-server:functional-045600 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-045600
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.99s)

TestFunctional/parallel/DockerEnv/powershell (5.36s)

=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:514: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-045600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-045600"
functional_test.go:514: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-045600 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-045600": (3.136281s)
functional_test.go:537: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-045600 docker-env | Invoke-Expression ; docker images"
functional_test.go:537: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-045600 docker-env | Invoke-Expression ; docker images": (2.2247305s)
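docker-env prints PowerShell-style environment assignments (DOCKER_HOST and related variables), so piping it through Invoke-Expression repoints the local docker client at the daemon inside the functional-045600 container for the rest of the session. A minimal interactive sketch of what the scripted runs above do, assuming Docker Desktop and the profile are both running:

	out/minikube-windows-amd64.exe -p functional-045600 docker-env | Invoke-Expression
	docker images    # now lists the images inside the minikube node, not the host's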
--- PASS: TestFunctional/parallel/DockerEnv/powershell (5.36s)

TestFunctional/parallel/ServiceCmd/Format (15.03s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-045600 service hello-node --url --format={{.IP}}: exit status 1 (15.030039s)

-- stdout --
	127.0.0.1
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
--- PASS: TestFunctional/parallel/ServiceCmd/Format (15.03s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.34s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.31s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.31s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.33s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.33s)

TestFunctional/parallel/ServiceCmd/URL (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-045600 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-045600 service hello-node --url: exit status 1 (15.0125054s)

-- stdout --
	http://127.0.0.1:56548
-- /stdout --
** stderr ** 
	! Because you are using a Docker driver on windows, the terminal needs to be open to run it.
** /stderr **
functional_test.go:1575: found endpoint for hello-node: http://127.0.0.1:56548
--- PASS: TestFunctional/parallel/ServiceCmd/URL (15.01s)

TestFunctional/delete_echo-server_images (0.15s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-045600
--- PASS: TestFunctional/delete_echo-server_images (0.15s)

TestFunctional/delete_my-image_image (0.06s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-045600
--- PASS: TestFunctional/delete_my-image_image (0.06s)

TestFunctional/delete_minikube_cached_images (0.07s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-045600
--- PASS: TestFunctional/delete_minikube_cached_images (0.07s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile
functional_test.go:1860: local sync path: C:\Users\jenkins.minikube4\minikube-integration\.minikube\files\etc\test\nested\copy\4168\hosts
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CopySyncFile (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/AuditLog (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.1s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/KubeContext (0.10s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (10.21s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 cache add registry.k8s.io/pause:3.1: (3.6664032s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 cache add registry.k8s.io/pause:3.3: (3.2184625s)
functional_test.go:1064: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 cache add registry.k8s.io/pause:latest
functional_test.go:1064: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 cache add registry.k8s.io/pause:latest: (3.3219279s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_remote (10.21s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (3.85s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-409700 C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialCach896528946\001
functional_test.go:1104: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 cache add minikube-local-cache-test:functional-409700
functional_test.go:1104: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 cache add minikube-local-cache-test:functional-409700: (2.6028773s)
functional_test.go:1109: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 cache delete minikube-local-cache-test:functional-409700
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-409700
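add_local checks that a locally built image can be copied into minikube's on-disk cache: build with the host docker, cache add by tag, after which the local tag can be deleted because the cache holds its own copy. The same flow by hand (the Temp\... build directory in the log is generated per run; any directory with a Dockerfile works):

	docker build -t minikube-local-cache-test:functional-409700 .
	out/minikube-windows-amd64.exe -p functional-409700 cache add minikube-local-cache-test:functional-409700
	docker rmi minikube-local-cache-test:functional-409700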
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/add_local (3.85s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/CacheDelete (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/list (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.58s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh sudo crictl images
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/verify_cache_inside_node (0.58s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (4.57s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-409700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (587.9156ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 cache reload
functional_test.go:1173: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 cache reload: (2.8781555s)
functional_test.go:1178: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
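cache_reload demonstrates that cache reload repopulates images deleted from inside the node: remove the image over ssh, confirm crictl can no longer resolve it (the expected exit status 1 above), reload from the host-side cache, then confirm it is back. The same sequence by hand:

	out/minikube-windows-amd64.exe -p functional-409700 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-windows-amd64.exe -p functional-409700 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image gone
	out/minikube-windows-amd64.exe -p functional-409700 cache reload
	out/minikube-windows-amd64.exe -p functional-409700 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds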
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/cache_reload (4.57s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/CacheCmd/cache/delete (0.37s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.25s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 logs
functional_test.go:1251: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 logs: (1.2482188s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsCmd (1.25s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.38s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs958405350\001\logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 logs --file C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0serialLogs958405350\001\logs.txt: (1.3765455s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/serial/LogsFileCmd (1.38s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (1.09s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-409700 config get cpus: exit status 14 (161.9996ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-409700 config get cpus: exit status 14 (148.0078ms)

** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ConfigCmd (1.09s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (1.61s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-409700 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:989: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-409700 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 23 (714.922ms)

-- stdout --
	* [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the docker driver based on existing profile

-- /stdout --
** stderr ** 
	I1217 00:57:28.603215   10992 out.go:360] Setting OutFile to fd 1240 ...
	I1217 00:57:28.655844   10992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:57:28.655844   10992 out.go:374] Setting ErrFile to fd 948...
	I1217 00:57:28.655844   10992 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:57:28.671070   10992 out.go:368] Setting JSON to false
	I1217 00:57:28.673422   10992 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4637,"bootTime":1765928411,"procs":191,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:57:28.673542   10992 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:57:28.677113   10992 out.go:179] * [functional-409700] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:57:28.679186   10992 notify.go:221] Checking for updates...
	I1217 00:57:28.681699   10992 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:57:28.683279   10992 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:57:28.686105   10992 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:57:28.688654   10992 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:57:28.691326   10992 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:57:28.694404   10992 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:57:28.695005   10992 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:57:28.894684   10992 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:57:28.898138   10992 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:57:29.134864   10992 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:66 OomKillDisable:true NGoroutines:85 SystemTime:2025-12-17 00:57:29.114604622 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:57:29.140514   10992 out.go:179] * Using the docker driver based on existing profile
	I1217 00:57:29.147731   10992 start.go:309] selected driver: docker
	I1217 00:57:29.147731   10992 start.go:927] validating driver "docker" against &{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:57:29.147731   10992 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:57:29.197640   10992 out.go:203] 
	W1217 00:57:29.200295   10992 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 00:57:29.205408   10992 out.go:203] 
** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-409700 --dry-run --alsologtostderr -v=1 --driver=docker --kubernetes-version=v1.35.0-beta.0
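The two runs bracket the validation path: 250MB trips RSRC_INSUFFICIENT_REQ_MEMORY (exit status 23) because it is below minikube's stated 1800MB usable minimum, while the second dry run, which inherits the profile's existing 4096MB allocation, validates cleanly. Any request at or above the floor would also pass; a sketch with an illustrative value:

	out/minikube-windows-amd64.exe start -p functional-409700 --dry-run --memory 3072MB --driver=docker --kubernetes-version=v1.35.0-beta.0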
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DryRun (1.61s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.69s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-409700 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-409700 --dry-run --memory 250MB --alsologtostderr --driver=docker --kubernetes-version=v1.35.0-beta.0: exit status 23 (686.9725ms)

-- stdout --
	* [functional-409700] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Utilisation du pilote docker basé sur le profil existant

-- /stdout --
** stderr ** 
	I1217 00:57:29.057945   10540 out.go:360] Setting OutFile to fd 664 ...
	I1217 00:57:29.105778   10540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:57:29.105778   10540 out.go:374] Setting ErrFile to fd 1048...
	I1217 00:57:29.105828   10540 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 00:57:29.133384   10540 out.go:368] Setting JSON to false
	I1217 00:57:29.136753   10540 start.go:133] hostinfo: {"hostname":"minikube4","uptime":4637,"bootTime":1765928411,"procs":190,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.6575 Build 19045.6575","kernelVersion":"10.0.19045.6575 Build 19045.6575","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"df6bfb5f-73f2-4acb-9365-df7854ecbb28"}
	W1217 00:57:29.136863   10540 start.go:141] gopshost.Virtualization returned error: not implemented yet
	I1217 00:57:29.142871   10540 out.go:179] * [functional-409700] minikube v1.37.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	I1217 00:57:29.149363   10540 notify.go:221] Checking for updates...
	I1217 00:57:29.152138   10540 out.go:179]   - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	I1217 00:57:29.155339   10540 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1217 00:57:29.158121   10540 out.go:179]   - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	I1217 00:57:29.160374   10540 out.go:179]   - MINIKUBE_LOCATION=22168
	I1217 00:57:29.162791   10540 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1217 00:57:29.165357   10540 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
	I1217 00:57:29.165357   10540 driver.go:422] Setting default libvirt URI to qemu:///system
	I1217 00:57:29.278361   10540 docker.go:124] docker version: linux-27.4.0:Docker Desktop 4.37.1 (178610)
	I1217 00:57:29.282363   10540 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1217 00:57:29.529841   10540 info.go:266] docker info: {ID:a15b78d1-f772-48f7-bbf5-f8fe086f3f87 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:90 SystemTime:2025-12-17 00:57:29.506866483 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:14 KernelVersion:5.15.153.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 Ind
exServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33657536512 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[com.docker.desktop.address=npipe://\\.\pipe\docker_cli] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:472731909fa34bd7bc9c087e4c27943f9835f111 Expected:472731909fa34bd7bc9c087e4c27943f9835f111} RuncCommit:{ID:v1.1.13-0-g58aa920 Expected:v1.1.13-0-g58aa920} InitCommit:{ID:de40ad0
Expected:de40ad0} SecurityOptions:[name=seccomp,profile=unconfined] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support WARNING: daemon is not using the default seccomp profile] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:ai Path:C:\Program Files\Docker\cli-plugins\docker-ai.exe SchemaVersion:0.1.0 ShortDescription:Ask Gordon - Docker Agent Vendor:Docker Inc. Version:v0.5.1] map[Name:buildx Path:C:\Program Files\Docker\cli-plugins\docker-buildx.exe SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2-desktop.1] map[Name:compose Path:C:\Program Files\Docker\cli-plugins\docker-compose.exe SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0-desktop.2] map[Name:debug Path:C:\Program Files\Docker\cli-plugins\docker-debug.exe SchemaVersion:0.1.0 ShortDescr
iption:Get a shell into any image or container Vendor:Docker Inc. Version:0.0.37] map[Name:desktop Path:C:\Program Files\Docker\cli-plugins\docker-desktop.exe SchemaVersion:0.1.0 ShortDescription:Docker Desktop commands (Beta) Vendor:Docker Inc. Version:v0.1.0] map[Name:dev Path:C:\Program Files\Docker\cli-plugins\docker-dev.exe SchemaVersion:0.1.0 ShortDescription:Docker Dev Environments Vendor:Docker Inc. Version:v0.1.2] map[Name:extension Path:C:\Program Files\Docker\cli-plugins\docker-extension.exe SchemaVersion:0.1.0 ShortDescription:Manages Docker extensions Vendor:Docker Inc. Version:v0.2.27] map[Name:feedback Path:C:\Program Files\Docker\cli-plugins\docker-feedback.exe SchemaVersion:0.1.0 ShortDescription:Provide feedback, right in your terminal! Vendor:Docker Inc. Version:v1.0.5] map[Name:init Path:C:\Program Files\Docker\cli-plugins\docker-init.exe SchemaVersion:0.1.0 ShortDescription:Creates Docker-related starter files for your project Vendor:Docker Inc. Version:v1.4.0] map[Name:sbom Path:C:\Progr
am Files\Docker\cli-plugins\docker-sbom.exe SchemaVersion:0.1.0 ShortDescription:View the packaged-based Software Bill Of Materials (SBOM) for an image URL:https://github.com/docker/sbom-cli-plugin Vendor:Anchore Inc. Version:0.6.0] map[Name:scout Path:C:\Program Files\Docker\cli-plugins\docker-scout.exe SchemaVersion:0.1.0 ShortDescription:Docker Scout Vendor:Docker Inc. Version:v1.15.1]] Warnings:<nil>}}
	I1217 00:57:29.532834   10540 out.go:179] * Using the docker driver based on existing profile
	I1217 00:57:29.535840   10540 start.go:309] selected driver: docker
	I1217 00:57:29.535840   10540 start.go:927] validating driver "docker" against &{Name:functional-409700 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1765661130-22141@sha256:71e28c3ba83563df15de2abc511e112c2c57545086c1b12459c4142b1e28ee78 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.35.0-beta.0 ClusterName:functional-409700 Namespace:default APIServerHAVIP: APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.35.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:do
cker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1217 00:57:29.535840   10540 start.go:938] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1217 00:57:29.589847   10540 out.go:203] 
	W1217 00:57:29.591839   10540 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1217 00:57:29.593846   10540 out.go:203] 

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/InternationalLanguage (0.69s)
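The failing start above is the point of this test: minikube is launched under a French locale with a deliberately undersized memory request, and the test passes once the captured output carries the localized insufficient-memory error. A minimal sketch of that flow, assuming hypothetical flag values (--dry-run, --memory 250MB) and matching only the locale-independent RSRC_INSUFFICIENT_REQ_MEMORY reason code rather than the exact translated message; the real assertions live in functional_test.go:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// Assumed invocation: a dry run with too little memory, forced into French.
	cmd := exec.Command("out/minikube-windows-amd64.exe",
		"start", "-p", "functional-409700", "--dry-run", "--memory", "250MB",
		"--alsologtostderr", "--driver=docker")
	cmd.Env = append(os.Environ(), "LC_ALL=fr", "LANGUAGE=fr", "LANG=fr")
	out, err := cmd.CombinedOutput()
	if err == nil {
		fmt.Println("expected a non-zero exit for the undersized memory request")
		os.Exit(1)
	}
	// The reason code survives translation, so assert on it instead of the text.
	if strings.Contains(string(out), "RSRC_INSUFFICIENT_REQ_MEMORY") {
		fmt.Println("localized insufficient-memory error reported, as expected")
	}
}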

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.4s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 addons list -o json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/AddonsCmd (0.40s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (1.13s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh "cat /etc/hostname"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/SSHCmd (1.13s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (3.11s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh -n functional-409700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 cp functional-409700:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestFunctionalNewestKubernetesVersionv1.35.0-beta.0parallelCp2573441544\001\cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh -n functional-409700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh -n functional-409700 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CpCmd (3.11s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.54s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/4168/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh "sudo cat /etc/test/nested/copy/4168/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/FileSync (0.54s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (3.19s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/4168.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh "sudo cat /etc/ssl/certs/4168.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/4168.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh "sudo cat /usr/share/ca-certificates/4168.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/41682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh "sudo cat /etc/ssl/certs/41682.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/41682.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh "sudo cat /usr/share/ca-certificates/41682.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/CertSync (3.19s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh "sudo systemctl is-active crio"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-409700 ssh "sudo systemctl is-active crio": exit status 1 (525.0842ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/NonActiveRuntimeDisabled (0.53s)
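The non-zero exit recorded above is expected rather than a failure: systemctl is-active exits with status 3 when a unit is inactive, so an error paired with "inactive" on stdout is exactly what shows the cri-o runtime is disabled while Docker is the active one. A rough sketch of that check; the command wiring here is an assumption, not minikube's actual test helper:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// minikube ssh forwards the remote exit status, and `systemctl is-active`
	// exits 3 with "inactive" on stdout when the crio unit is disabled.
	out, err := exec.Command("out/minikube-windows-amd64.exe",
		"-p", "functional-409700", "ssh", "sudo systemctl is-active crio").Output()
	state := strings.TrimSpace(string(out))
	if err != nil && state == "inactive" {
		fmt.Println("crio is inactive, as expected with docker as the runtime")
		return
	}
	fmt.Printf("unexpected state %q (err=%v)\n", state, err)
}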

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (2.28s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2293: (dbg) Done: out/minikube-windows-amd64.exe license: (2.2693511s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/License (2.28s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-409700 tunnel --alsologtostderr]
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-409700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:437: failed to stop process: exit status 103
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DeleteTunnel (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.29s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_changes (0.29s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.34s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_minikube_cluster (0.34s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.31s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 update-context --alsologtostderr -v=2
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/UpdateContextCmd/no_clusters (0.31s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.88s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_not_create (0.88s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.82s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1330: Took "667.092ms" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1344: Took "154.0319ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_list (0.82s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.8s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1381: Took "637.1506ms" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1394: Took "157.5243ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ProfileCmd/profile_json_output (0.80s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 version --short
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/short (0.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (1.8s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 version -o=json --components
functional_test.go:2275: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 version -o=json --components: (1.7955259s)
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/Version/components (1.80s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.43s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image ls --format short --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-409700 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10.1
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.35.0-beta.0
registry.k8s.io/kube-proxy:v1.35.0-beta.0
registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
registry.k8s.io/kube-apiserver:v1.35.0-beta.0
registry.k8s.io/etcd:3.6.5-0
registry.k8s.io/coredns/coredns:v1.13.1
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-409700
docker.io/kicbase/echo-server:functional-409700
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-409700 image ls --format short --alsologtostderr:
I1217 00:57:30.849394   13640 out.go:360] Setting OutFile to fd 1980 ...
I1217 00:57:30.892401   13640 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:57:30.892401   13640 out.go:374] Setting ErrFile to fd 784...
I1217 00:57:30.892401   13640 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:57:30.902403   13640 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:57:30.903400   13640 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:57:30.911400   13640 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
I1217 00:57:30.967960   13640 ssh_runner.go:195] Run: systemctl --version
I1217 00:57:30.971054   13640 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
I1217 00:57:31.021283   13640 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
I1217 00:57:31.139869   13640 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListShort (0.43s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-409700 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────────┬───────────────────┬───────────────┬────────┐
│                    IMAGE                    │        TAG        │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────────┼───────────────────┼───────────────┼────────┤
│ gcr.io/k8s-minikube/storage-provisioner     │ v5                │ 6e38f40d628db │ 31.5MB │
│ registry.k8s.io/pause                       │ 3.3               │ 0184c1613d929 │ 683kB  │
│ registry.k8s.io/pause                       │ latest            │ 350b164e7ae1d │ 240kB  │
│ docker.io/library/minikube-local-cache-test │ functional-409700 │ 3dfbd7ce7f025 │ 30B    │
│ registry.k8s.io/kube-apiserver              │ v1.35.0-beta.0    │ aa9d02839d8de │ 89.7MB │
│ registry.k8s.io/kube-proxy                  │ v1.35.0-beta.0    │ 8a4ded35a3eb1 │ 70.7MB │
│ registry.k8s.io/kube-controller-manager     │ v1.35.0-beta.0    │ 45f3cc72d235f │ 75.8MB │
│ registry.k8s.io/etcd                        │ 3.6.5-0           │ a3e246e9556e9 │ 62.5MB │
│ registry.k8s.io/pause                       │ 3.10.1            │ cd073f4c5f6a8 │ 736kB  │
│ docker.io/kicbase/echo-server               │ functional-409700 │ 9056ab77afb8e │ 4.94MB │
│ registry.k8s.io/pause                       │ 3.1               │ da86e6ba6ca19 │ 742kB  │
│ registry.k8s.io/kube-scheduler              │ v1.35.0-beta.0    │ 7bb6219ddab95 │ 51.7MB │
│ registry.k8s.io/coredns/coredns             │ v1.13.1           │ aa5e3ebc0dfed │ 78.1MB │
└─────────────────────────────────────────────┴───────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-409700 image ls --format table --alsologtostderr:
I1217 00:57:32.986665    6744 out.go:360] Setting OutFile to fd 1324 ...
I1217 00:57:33.032654    6744 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:57:33.032654    6744 out.go:374] Setting ErrFile to fd 1980...
I1217 00:57:33.032654    6744 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:57:33.044661    6744 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:57:33.044661    6744 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:57:33.051660    6744 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
I1217 00:57:33.108659    6744 ssh_runner.go:195] Run: systemctl --version
I1217 00:57:33.111656    6744 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
I1217 00:57:33.168059    6744 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
I1217 00:57:33.290216    6744 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListTable (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.46s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-409700 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.35.0-beta.0"],"size":"51700000"},{"id":"a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.6.5-0"],"size":"62500000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-409700"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0
e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"3dfbd7ce7f02518ee06e7f27cfc1795f8d9a41bd7d80f3a874e9326090bbbae3","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-409700"],"size":"30"},{"id":"aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.35.0-beta.0"],"size":"89700000"},{"id":"45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.35.0-beta.0"],"size":"75800000"},{"id":"8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.35.0-beta.0"],"size":"70700000"},{"id":"aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.13.1"],"size":"78100000"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f"
,"repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"736000"}]
functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-409700 image ls --format json --alsologtostderr:
I1217 00:57:32.541436   11108 out.go:360] Setting OutFile to fd 1676 ...
I1217 00:57:32.587433   11108 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:57:32.587433   11108 out.go:374] Setting ErrFile to fd 1412...
I1217 00:57:32.587433   11108 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:57:32.599439   11108 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:57:32.599439   11108 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:57:32.606430   11108 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
I1217 00:57:32.663433   11108 ssh_runner.go:195] Run: systemctl --version
I1217 00:57:32.667434   11108 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
I1217 00:57:32.719438   11108 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
I1217 00:57:32.847434   11108 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListJson (0.46s)
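The JSON listing above is a flat array of image records with id, repoDigests, repoTags, and a size encoded as a string of bytes. A short sketch of consuming that output; the struct simply mirrors the fields visible in this report:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the record shape seen in the `image ls --format json` output.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"` // bytes, encoded as a string
}

func main() {
	out, err := exec.Command("out/minikube-windows-amd64.exe",
		"-p", "functional-409700", "image", "ls", "--format", "json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		fmt.Println(img.RepoTags, img.Size)
	}
}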

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.45s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-409700 image ls --format yaml --alsologtostderr:
- id: 3dfbd7ce7f02518ee06e7f27cfc1795f8d9a41bd7d80f3a874e9326090bbbae3
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-409700
size: "30"
- id: aa9d02839d8def718798bd410c88aba69248b26a8f0e3af2c728b512b67cb52b
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.35.0-beta.0
size: "89700000"
- id: 8a4ded35a3eb1a80eb49c1a887194460a56b413eed7eb69e59605daf4ec23810
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.35.0-beta.0
size: "70700000"
- id: a3e246e9556e93d71e2850085ba581b376c76a9187b4b8a01c120f86579ef2b1
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.6.5-0
size: "62500000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-409700
size: "4940000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 7bb6219ddab95bdabbef83f051bee4fdd14b6f791aaa3121080cb2c58ada2e46
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.35.0-beta.0
size: "51700000"
- id: 45f3cc72d235f1cfda3de70fe9b2b9d3b356091e491b915f9efd6f0d6e5253bc
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.35.0-beta.0
size: "75800000"
- id: aa5e3ebc0dfed0566805186b9e47110d8f9122291d8bad1497e78873ad291139
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.13.1
size: "78100000"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10.1
size: "736000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"

functional_test.go:284: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-409700 image ls --format yaml --alsologtostderr:
I1217 00:57:31.278658     184 out.go:360] Setting OutFile to fd 1560 ...
I1217 00:57:31.324996     184 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:57:31.324996     184 out.go:374] Setting ErrFile to fd 1536...
I1217 00:57:31.324996     184 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:57:31.337216     184 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:57:31.337965     184 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:57:31.344619     184 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
I1217 00:57:31.407480     184 ssh_runner.go:195] Run: systemctl --version
I1217 00:57:31.410301     184 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
I1217 00:57:31.465233     184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
I1217 00:57:31.591890     184 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageListYaml (0.45s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (5.37s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-409700 ssh pgrep buildkitd: exit status 1 (522.0699ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image build -t localhost/my-image:functional-409700 testdata\build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 image build -t localhost/my-image:functional-409700 testdata\build --alsologtostderr: (4.40719s)
functional_test.go:338: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-409700 image build -t localhost/my-image:functional-409700 testdata\build --alsologtostderr:
I1217 00:57:32.259431   14208 out.go:360] Setting OutFile to fd 1560 ...
I1217 00:57:32.308430   14208 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:57:32.308430   14208 out.go:374] Setting ErrFile to fd 1536...
I1217 00:57:32.308430   14208 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1217 00:57:32.321442   14208 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:57:32.324428   14208 config.go:182] Loaded profile config "functional-409700": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.35.0-beta.0
I1217 00:57:32.331430   14208 cli_runner.go:164] Run: docker container inspect functional-409700 --format={{.State.Status}}
I1217 00:57:32.384437   14208 ssh_runner.go:195] Run: systemctl --version
I1217 00:57:32.388432   14208 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-409700
I1217 00:57:32.437430   14208 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:56623 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\functional-409700\id_rsa Username:docker}
I1217 00:57:32.553436   14208 build_images.go:162] Building image from path: C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3847394621.tar
I1217 00:57:32.557436   14208 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1217 00:57:32.577438   14208 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3847394621.tar
I1217 00:57:32.585447   14208 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3847394621.tar: stat -c "%s %y" /var/lib/minikube/build/build.3847394621.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3847394621.tar': No such file or directory
I1217 00:57:32.585447   14208 ssh_runner.go:362] scp C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3847394621.tar --> /var/lib/minikube/build/build.3847394621.tar (3072 bytes)
I1217 00:57:32.624438   14208 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3847394621
I1217 00:57:32.642432   14208 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3847394621 -xf /var/lib/minikube/build/build.3847394621.tar
I1217 00:57:32.660441   14208 docker.go:361] Building image: /var/lib/minikube/build/build.3847394621
I1217 00:57:32.664433   14208 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-409700 /var/lib/minikube/build/build.3847394621
#0 building with "default" instance using docker driver

#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile:
#1 transferring dockerfile: 97B 0.0s done
#1 DONE 0.2s

#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 1.1s

#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.1s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.0s done
#5 DONE 0.7s

#6 [2/3] RUN true
#6 DONE 0.5s

#7 [3/3] ADD content.txt /
#7 DONE 0.2s

#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:c44f91baf1d51a0e6096484a05744a3cba3ca896ab1961308a8891a44017a162 done
#8 naming to localhost/my-image:functional-409700 0.0s done
#8 DONE 0.2s
I1217 00:57:36.521742   14208 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-409700 /var/lib/minikube/build/build.3847394621: (3.8572406s)
I1217 00:57:36.525376   14208 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3847394621
I1217 00:57:36.543588   14208 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3847394621.tar
I1217 00:57:36.556365   14208 build_images.go:218] Built localhost/my-image:functional-409700 from C:\Users\jenkins.minikube4\AppData\Local\Temp\build.3847394621.tar
I1217 00:57:36.556365   14208 build_images.go:134] succeeded building to: functional-409700
I1217 00:57:36.556365   14208 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageBuild (5.37s)
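The BuildKit trace above pins the build context down fairly well: a 97-byte Dockerfile with three steps (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) and a 62-byte context file. A sketch that reconstructs an equivalent context and replays the same image build invocation; the Dockerfile text and the content.txt payload are inferred from the trace, not copied from testdata\build:

package main

import (
	"os"
	"os/exec"
	"path/filepath"
)

func main() {
	dir, err := os.MkdirTemp("", "build")
	if err != nil {
		panic(err)
	}
	// Inferred from build steps #1-#7 above; the real testdata may differ.
	dockerfile := "FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n"
	if err := os.WriteFile(filepath.Join(dir, "Dockerfile"), []byte(dockerfile), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile(filepath.Join(dir, "content.txt"), []byte("build test\n"), 0o644); err != nil {
		panic(err)
	}
	cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", "functional-409700",
		"image", "build", "-t", "localhost/my-image:functional-409700", dir)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		panic(err)
	}
}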

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.85s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-409700
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/Setup (0.85s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (3.3s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image load --daemon kicbase/echo-server:functional-409700 --alsologtostderr
functional_test.go:370: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 image load --daemon kicbase/echo-server:functional-409700 --alsologtostderr: (2.8457983s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadDaemon (3.30s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (2.78s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image load --daemon kicbase/echo-server:functional-409700 --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 image load --daemon kicbase/echo-server:functional-409700 --alsologtostderr: (2.3209869s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageReloadDaemon (2.78s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (3.52s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-409700
functional_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image load --daemon kicbase/echo-server:functional-409700 --alsologtostderr
functional_test.go:260: (dbg) Done: out/minikube-windows-amd64.exe -p functional-409700 image load --daemon kicbase/echo-server:functional-409700 --alsologtostderr: (2.3442003s)
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageTagAndLoadDaemon (3.52s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.67s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image save kicbase/echo-server:functional-409700 C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveToFile (0.67s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.9s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image rm kicbase/echo-server:functional-409700 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageRemove (0.90s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.18s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image load C:\jenkins\workspace\Docker_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image ls
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageLoadFromFile (1.18s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.86s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-409700
functional_test.go:439: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-409700 image save --daemon kicbase/echo-server:functional-409700 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect kicbase/echo-server:functional-409700
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/ImageCommands/ImageSaveDaemon (0.86s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.14s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-409700
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_echo-server_images (0.14s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.06s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-409700
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_my-image_image (0.06s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.05s)
=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-409700
--- PASS: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/delete_minikube_cached_images (0.05s)

TestMultiControlPlane/serial/StartCluster (241.54s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker
E1217 01:00:22.345140    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:00:22.351838    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:00:22.363651    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:00:22.385846    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:00:22.428104    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:00:22.509723    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:00:22.671536    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:00:22.994101    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:00:23.636256    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:00:24.918523    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:00:27.480510    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:00:32.602389    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:00:33.709705    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:00:42.844627    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:01:03.327707    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:01:17.189752    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:01:44.290632    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:03:06.213760    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:03:14.119359    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker: (3m59.9411624s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 status --alsologtostderr -v 5
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 status --alsologtostderr -v 5: (1.5953947s)
--- PASS: TestMultiControlPlane/serial/StartCluster (241.54s)
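This StartCluster run provisions the cluster that all later TestMultiControlPlane subtests reuse. Stripped of the verbose-logging flags, the equivalent manual invocation is:

    minikube -p ha-518100 start --ha --memory 3072 --wait true --driver=docker
    minikube -p ha-518100 status

With --ha, minikube creates several control-plane nodes for the profile (ha-518100, -m02, -m03 in the status output further down), and --wait true blocks until the cluster components report ready.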

TestMultiControlPlane/serial/DeployApp (10.6s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 kubectl -- rollout status deployment/busybox: (4.3608178s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- exec busybox-7b57f96db7-7w9km -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- exec busybox-7b57f96db7-g7pfq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 kubectl -- exec busybox-7b57f96db7-g7pfq -- nslookup kubernetes.io: (1.2796533s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- exec busybox-7b57f96db7-lfntk -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- exec busybox-7b57f96db7-7w9km -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- exec busybox-7b57f96db7-g7pfq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- exec busybox-7b57f96db7-lfntk -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- exec busybox-7b57f96db7-7w9km -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- exec busybox-7b57f96db7-g7pfq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- exec busybox-7b57f96db7-lfntk -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (10.60s)

TestMultiControlPlane/serial/PingHostFromPods (2.5s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- exec busybox-7b57f96db7-7w9km -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- exec busybox-7b57f96db7-7w9km -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- exec busybox-7b57f96db7-g7pfq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- exec busybox-7b57f96db7-g7pfq -- sh -c "ping -c 1 192.168.65.254"
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- exec busybox-7b57f96db7-lfntk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 kubectl -- exec busybox-7b57f96db7-lfntk -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (2.50s)
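Each pod's host-connectivity check above is a two-step probe: resolve host.minikube.internal from inside the pod, then ping the address it resolves to (192.168.65.254 in this run). The awk/cut pipeline merely extracts the resolved address from a fixed line of busybox nslookup output:

    kubectl exec <busybox-pod> -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl exec <busybox-pod> -- sh -c "ping -c 1 192.168.65.254"

(<busybox-pod> stands for the pod names listed above.)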

TestMultiControlPlane/serial/AddWorkerNode (55.74s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 node add --alsologtostderr -v 5: (53.8312748s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 status --alsologtostderr -v 5
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 status --alsologtostderr -v 5: (1.9067038s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (55.74s)

TestMultiControlPlane/serial/NodeLabels (0.14s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-518100 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.14s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (2s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.0009845s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (2.00s)

TestMultiControlPlane/serial/CopyFile (33.8s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 status --output json --alsologtostderr -v 5
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 status --output json --alsologtostderr -v 5: (1.918399s)
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp testdata\cp-test.txt ha-518100:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp ha-518100:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2918997432\001\cp-test_ha-518100.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp ha-518100:/home/docker/cp-test.txt ha-518100-m02:/home/docker/cp-test_ha-518100_ha-518100-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m02 "sudo cat /home/docker/cp-test_ha-518100_ha-518100-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp ha-518100:/home/docker/cp-test.txt ha-518100-m03:/home/docker/cp-test_ha-518100_ha-518100-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m03 "sudo cat /home/docker/cp-test_ha-518100_ha-518100-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp ha-518100:/home/docker/cp-test.txt ha-518100-m04:/home/docker/cp-test_ha-518100_ha-518100-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m04 "sudo cat /home/docker/cp-test_ha-518100_ha-518100-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp testdata\cp-test.txt ha-518100-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp ha-518100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2918997432\001\cp-test_ha-518100-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp ha-518100-m02:/home/docker/cp-test.txt ha-518100:/home/docker/cp-test_ha-518100-m02_ha-518100.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100 "sudo cat /home/docker/cp-test_ha-518100-m02_ha-518100.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp ha-518100-m02:/home/docker/cp-test.txt ha-518100-m03:/home/docker/cp-test_ha-518100-m02_ha-518100-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m03 "sudo cat /home/docker/cp-test_ha-518100-m02_ha-518100-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp ha-518100-m02:/home/docker/cp-test.txt ha-518100-m04:/home/docker/cp-test_ha-518100-m02_ha-518100-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m04 "sudo cat /home/docker/cp-test_ha-518100-m02_ha-518100-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp testdata\cp-test.txt ha-518100-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp ha-518100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2918997432\001\cp-test_ha-518100-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp ha-518100-m03:/home/docker/cp-test.txt ha-518100:/home/docker/cp-test_ha-518100-m03_ha-518100.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100 "sudo cat /home/docker/cp-test_ha-518100-m03_ha-518100.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp ha-518100-m03:/home/docker/cp-test.txt ha-518100-m02:/home/docker/cp-test_ha-518100-m03_ha-518100-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m02 "sudo cat /home/docker/cp-test_ha-518100-m03_ha-518100-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp ha-518100-m03:/home/docker/cp-test.txt ha-518100-m04:/home/docker/cp-test_ha-518100-m03_ha-518100-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m04 "sudo cat /home/docker/cp-test_ha-518100-m03_ha-518100-m04.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp testdata\cp-test.txt ha-518100-m04:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp ha-518100-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile2918997432\001\cp-test_ha-518100-m04.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp ha-518100-m04:/home/docker/cp-test.txt ha-518100:/home/docker/cp-test_ha-518100-m04_ha-518100.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100 "sudo cat /home/docker/cp-test_ha-518100-m04_ha-518100.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp ha-518100-m04:/home/docker/cp-test.txt ha-518100-m02:/home/docker/cp-test_ha-518100-m04_ha-518100-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m02 "sudo cat /home/docker/cp-test_ha-518100-m04_ha-518100-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 cp ha-518100-m04:/home/docker/cp-test.txt ha-518100-m03:/home/docker/cp-test_ha-518100-m04_ha-518100-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 ssh -n ha-518100-m03 "sudo cat /home/docker/cp-test_ha-518100-m04_ha-518100-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (33.80s)
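The CopyFile matrix above covers every direction "minikube cp" supports, with each copy verified by cat-ing the file over SSH: local host to node, node back to the local host, and node to node, repeated from all four nodes. One cycle, with paths shortened (<tmp> is the run's temp directory), looks like:

    minikube -p ha-518100 cp testdata\cp-test.txt ha-518100:/home/docker/cp-test.txt
    minikube -p ha-518100 cp ha-518100:/home/docker/cp-test.txt <tmp>\cp-test_ha-518100.txt
    minikube -p ha-518100 cp ha-518100:/home/docker/cp-test.txt ha-518100-m02:/home/docker/cp-test_ha-518100_ha-518100-m02.txt
    minikube -p ha-518100 ssh -n ha-518100-m02 "sudo cat /home/docker/cp-test_ha-518100_ha-518100-m02.txt"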

TestMultiControlPlane/serial/StopSecondaryNode (13.51s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 node stop m02 --alsologtostderr -v 5
E1217 01:05:16.788937    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:05:22.349121    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 node stop m02 --alsologtostderr -v 5: (12.0174623s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-518100 status --alsologtostderr -v 5: exit status 7 (1.4860997s)
-- stdout --
	ha-518100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-518100-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-518100-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-518100-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1217 01:05:27.919106   14096 out.go:360] Setting OutFile to fd 1944 ...
	I1217 01:05:27.961896   14096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:05:27.961896   14096 out.go:374] Setting ErrFile to fd 1692...
	I1217 01:05:27.961896   14096 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:05:27.972434   14096 out.go:368] Setting JSON to false
	I1217 01:05:27.972434   14096 mustload.go:66] Loading cluster: ha-518100
	I1217 01:05:27.972434   14096 notify.go:221] Checking for updates...
	I1217 01:05:27.973012   14096 config.go:182] Loaded profile config "ha-518100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:05:27.973012   14096 status.go:174] checking status of ha-518100 ...
	I1217 01:05:27.980368   14096 cli_runner.go:164] Run: docker container inspect ha-518100 --format={{.State.Status}}
	I1217 01:05:28.037679   14096 status.go:371] ha-518100 host status = "Running" (err=<nil>)
	I1217 01:05:28.037734   14096 host.go:66] Checking if "ha-518100" exists ...
	I1217 01:05:28.041470   14096 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-518100
	I1217 01:05:28.098212   14096 host.go:66] Checking if "ha-518100" exists ...
	I1217 01:05:28.103218   14096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:05:28.106213   14096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-518100
	I1217 01:05:28.158225   14096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58310 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-518100\id_rsa Username:docker}
	I1217 01:05:28.276285   14096 ssh_runner.go:195] Run: systemctl --version
	I1217 01:05:28.289583   14096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:05:28.313531   14096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-518100
	I1217 01:05:28.367873   14096 kubeconfig.go:125] found "ha-518100" server: "https://127.0.0.1:58314"
	I1217 01:05:28.367873   14096 api_server.go:166] Checking apiserver status ...
	I1217 01:05:28.372659   14096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:05:28.395984   14096 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2280/cgroup
	I1217 01:05:28.408628   14096 api_server.go:182] apiserver freezer: "7:freezer:/docker/9dbbe9244133ce6aba98863cbddc6fb71fc47136a1e8dba5781b247ac5a71ef0/kubepods/burstable/pod8dab8f676950d4d8741d1c77caa3d8d1/9633bfae44a2024db8562c5fc8b80286ec002c19a5bcc7798874c2cabc108b40"
	I1217 01:05:28.414134   14096 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/9dbbe9244133ce6aba98863cbddc6fb71fc47136a1e8dba5781b247ac5a71ef0/kubepods/burstable/pod8dab8f676950d4d8741d1c77caa3d8d1/9633bfae44a2024db8562c5fc8b80286ec002c19a5bcc7798874c2cabc108b40/freezer.state
	I1217 01:05:28.427648   14096 api_server.go:204] freezer state: "THAWED"
	I1217 01:05:28.427648   14096 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58314/healthz ...
	I1217 01:05:28.440597   14096 api_server.go:279] https://127.0.0.1:58314/healthz returned 200:
	ok
	I1217 01:05:28.440597   14096 status.go:463] ha-518100 apiserver status = Running (err=<nil>)
	I1217 01:05:28.441133   14096 status.go:176] ha-518100 status: &{Name:ha-518100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:05:28.441133   14096 status.go:174] checking status of ha-518100-m02 ...
	I1217 01:05:28.448417   14096 cli_runner.go:164] Run: docker container inspect ha-518100-m02 --format={{.State.Status}}
	I1217 01:05:28.502548   14096 status.go:371] ha-518100-m02 host status = "Stopped" (err=<nil>)
	I1217 01:05:28.502591   14096 status.go:384] host is not running, skipping remaining checks
	I1217 01:05:28.502591   14096 status.go:176] ha-518100-m02 status: &{Name:ha-518100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:05:28.502640   14096 status.go:174] checking status of ha-518100-m03 ...
	I1217 01:05:28.509863   14096 cli_runner.go:164] Run: docker container inspect ha-518100-m03 --format={{.State.Status}}
	I1217 01:05:28.565442   14096 status.go:371] ha-518100-m03 host status = "Running" (err=<nil>)
	I1217 01:05:28.565442   14096 host.go:66] Checking if "ha-518100-m03" exists ...
	I1217 01:05:28.570258   14096 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-518100-m03
	I1217 01:05:28.630876   14096 host.go:66] Checking if "ha-518100-m03" exists ...
	I1217 01:05:28.635746   14096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:05:28.638880   14096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-518100-m03
	I1217 01:05:28.695416   14096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58433 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-518100-m03\id_rsa Username:docker}
	I1217 01:05:28.825505   14096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:05:28.847418   14096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" ha-518100
	I1217 01:05:28.900590   14096 kubeconfig.go:125] found "ha-518100" server: "https://127.0.0.1:58314"
	I1217 01:05:28.900590   14096 api_server.go:166] Checking apiserver status ...
	I1217 01:05:28.905000   14096 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:05:28.929554   14096 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2198/cgroup
	I1217 01:05:28.943609   14096 api_server.go:182] apiserver freezer: "7:freezer:/docker/39cdbcddf48d5e7659033e313bac0a7ff12cb728c34a5112ac93f129dd58f91a/kubepods/burstable/pod85d59637bbe8d2bf4ad26d414e388966/76ebc649770e7db69cf159fd7b88fd2572e0e383098ca4e754666221d69ca3a0"
	I1217 01:05:28.948089   14096 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/39cdbcddf48d5e7659033e313bac0a7ff12cb728c34a5112ac93f129dd58f91a/kubepods/burstable/pod85d59637bbe8d2bf4ad26d414e388966/76ebc649770e7db69cf159fd7b88fd2572e0e383098ca4e754666221d69ca3a0/freezer.state
	I1217 01:05:28.961367   14096 api_server.go:204] freezer state: "THAWED"
	I1217 01:05:28.961445   14096 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:58314/healthz ...
	I1217 01:05:28.969505   14096 api_server.go:279] https://127.0.0.1:58314/healthz returned 200:
	ok
	I1217 01:05:28.969505   14096 status.go:463] ha-518100-m03 apiserver status = Running (err=<nil>)
	I1217 01:05:28.969505   14096 status.go:176] ha-518100-m03 status: &{Name:ha-518100-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:05:28.969505   14096 status.go:174] checking status of ha-518100-m04 ...
	I1217 01:05:28.977304   14096 cli_runner.go:164] Run: docker container inspect ha-518100-m04 --format={{.State.Status}}
	I1217 01:05:29.030724   14096 status.go:371] ha-518100-m04 host status = "Running" (err=<nil>)
	I1217 01:05:29.030724   14096 host.go:66] Checking if "ha-518100-m04" exists ...
	I1217 01:05:29.036003   14096 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-518100-m04
	I1217 01:05:29.089247   14096 host.go:66] Checking if "ha-518100-m04" exists ...
	I1217 01:05:29.096644   14096 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:05:29.100295   14096 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-518100-m04
	I1217 01:05:29.155826   14096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:58566 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\ha-518100-m04\id_rsa Username:docker}
	I1217 01:05:29.287195   14096 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:05:29.308563   14096 status.go:176] ha-518100-m04 status: &{Name:ha-518100-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (13.51s)
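Note what passes here: after m02 is stopped, "minikube status" still prints per-node detail (the stdout above shows m02 fully Stopped while the other nodes keep running), but it exits non-zero (exit status 7 in this run), and that non-zero exit is what the test expects:

    minikube -p ha-518100 node stop m02
    minikube -p ha-518100 status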

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.56s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.5561882s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (1.56s)

TestMultiControlPlane/serial/RestartSecondaryNode (98.16s)
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 node start m02 --alsologtostderr -v 5
E1217 01:05:33.712861    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:05:50.057950    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 node start m02 --alsologtostderr -v 5: (1m36.1324457s)
ha_test.go:430: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 status --alsologtostderr -v 5
ha_test.go:430: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 status --alsologtostderr -v 5: (1.8923033s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (98.16s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.44s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (3.4373405s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (3.44s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (168.94s)
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 stop --alsologtostderr -v 5
ha_test.go:464: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 stop --alsologtostderr -v 5: (38.8918756s)
ha_test.go:469: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 start --wait true --alsologtostderr -v 5
E1217 01:08:14.122766    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 start --wait true --alsologtostderr -v 5: (2m9.7113448s)
ha_test.go:474: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (168.94s)
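The point of this subtest is that a full stop/start cycle preserves the cluster topology: the node list is captured, the whole profile is stopped and restarted, and the node list is compared again. Condensed, minus the logging flags:

    minikube -p ha-518100 node list
    minikube -p ha-518100 stop
    minikube -p ha-518100 start --wait true
    minikube -p ha-518100 node list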

TestMultiControlPlane/serial/DeleteSecondaryNode (14.9s)
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 node delete m03 --alsologtostderr -v 5: (12.9890295s)
ha_test.go:495: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 status --alsologtostderr -v 5
ha_test.go:495: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 status --alsologtostderr -v 5: (1.4598914s)
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (14.90s)
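The go-template query on the last step is the readiness probe used after topology changes: it walks every node's conditions and prints only the Ready condition's status, one line per node, so a healthy post-delete cluster yields nothing but True lines:

    kubectl get nodes -o go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'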

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.49s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.4913258s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (1.49s)

TestMultiControlPlane/serial/StopCluster (37.76s)
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 stop --alsologtostderr -v 5
E1217 01:10:22.352372    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:10:33.717542    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:533: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 stop --alsologtostderr -v 5: (37.4260158s)
ha_test.go:539: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-518100 status --alsologtostderr -v 5: exit status 7 (337.2669ms)
-- stdout --
	ha-518100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-518100-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-518100-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1217 01:10:55.314808   13420 out.go:360] Setting OutFile to fd 1188 ...
	I1217 01:10:55.357360   13420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:10:55.357360   13420 out.go:374] Setting ErrFile to fd 1876...
	I1217 01:10:55.357360   13420 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:10:55.367517   13420 out.go:368] Setting JSON to false
	I1217 01:10:55.367517   13420 mustload.go:66] Loading cluster: ha-518100
	I1217 01:10:55.368521   13420 notify.go:221] Checking for updates...
	I1217 01:10:55.368741   13420 config.go:182] Loaded profile config "ha-518100": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:10:55.368741   13420 status.go:174] checking status of ha-518100 ...
	I1217 01:10:55.376299   13420 cli_runner.go:164] Run: docker container inspect ha-518100 --format={{.State.Status}}
	I1217 01:10:55.438732   13420 status.go:371] ha-518100 host status = "Stopped" (err=<nil>)
	I1217 01:10:55.438786   13420 status.go:384] host is not running, skipping remaining checks
	I1217 01:10:55.438786   13420 status.go:176] ha-518100 status: &{Name:ha-518100 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:10:55.438819   13420 status.go:174] checking status of ha-518100-m02 ...
	I1217 01:10:55.444854   13420 cli_runner.go:164] Run: docker container inspect ha-518100-m02 --format={{.State.Status}}
	I1217 01:10:55.501686   13420 status.go:371] ha-518100-m02 host status = "Stopped" (err=<nil>)
	I1217 01:10:55.501686   13420 status.go:384] host is not running, skipping remaining checks
	I1217 01:10:55.501686   13420 status.go:176] ha-518100-m02 status: &{Name:ha-518100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:10:55.501686   13420 status.go:174] checking status of ha-518100-m04 ...
	I1217 01:10:55.507554   13420 cli_runner.go:164] Run: docker container inspect ha-518100-m04 --format={{.State.Status}}
	I1217 01:10:55.559019   13420 status.go:371] ha-518100-m04 host status = "Stopped" (err=<nil>)
	I1217 01:10:55.559019   13420 status.go:384] host is not running, skipping remaining checks
	I1217 01:10:55.559019   13420 status.go:176] ha-518100-m04 status: &{Name:ha-518100-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (37.76s)

TestMultiControlPlane/serial/RestartCluster (85.99s)
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 start --wait true --alsologtostderr -v 5 --driver=docker
ha_test.go:562: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 start --wait true --alsologtostderr -v 5 --driver=docker: (1m24.1598136s)
ha_test.go:568: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 status --alsologtostderr -v 5
ha_test.go:568: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 status --alsologtostderr -v 5: (1.4192514s)
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (85.99s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.53s)
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:392: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.531056s)
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (1.53s)

TestMultiControlPlane/serial/AddSecondaryNode (84.86s)
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 node add --control-plane --alsologtostderr -v 5
E1217 01:13:14.126623    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:607: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 node add --control-plane --alsologtostderr -v 5: (1m22.9339414s)
ha_test.go:613: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-518100 status --alsologtostderr -v 5
ha_test.go:613: (dbg) Done: out/minikube-windows-amd64.exe -p ha-518100 status --alsologtostderr -v 5: (1.925998s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (84.86s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.01s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (2.0052546s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (2.01s)

TestImageBuild/serial/Setup (52.06s)
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-103800 --driver=docker
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-103800 --driver=docker: (52.0541507s)
--- PASS: TestImageBuild/serial/Setup (52.06s)

TestImageBuild/serial/NormalBuild (4.65s)
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-103800
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-103800: (4.6498698s)
--- PASS: TestImageBuild/serial/NormalBuild (4.65s)

TestImageBuild/serial/BuildWithBuildArg (2.17s)
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-103800
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-103800: (2.1711364s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (2.17s)

TestImageBuild/serial/BuildWithDockerIgnore (1.33s)
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-103800
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-103800: (1.3285383s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (1.33s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.28s)
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-103800
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-103800: (1.2771815s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (1.28s)
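Together the four TestImageBuild subtests cover the main "minikube image build" variants seen above: a plain build, build args plus cache busting via --build-opt, a context carrying a .dockerignore, and an explicitly selected Dockerfile:

    minikube image build -t aaa:latest ./testdata/image-build/test-normal -p image-103800
    minikube image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-103800
    minikube image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-103800
    minikube image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-103800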

TestJSONOutput/start/Command (79.72s)
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-128000 --output=json --user=testUser --memory=3072 --wait=true --driver=docker
E1217 01:15:22.355510    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:15:33.720887    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-128000 --output=json --user=testUser --memory=3072 --wait=true --driver=docker: (1m19.7216469s)
--- PASS: TestJSONOutput/start/Command (79.72s)

TestJSONOutput/start/Audit (0s)
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (1.07s)
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-128000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-128000 --output=json --user=testUser: (1.0719307s)
--- PASS: TestJSONOutput/pause/Command (1.07s)

TestJSONOutput/pause/Audit (0s)
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.95s)
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-128000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.95s)

TestJSONOutput/unpause/Audit (0s)
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (12.12s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-128000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-128000 --output=json --user=testUser: (12.1242247s)
--- PASS: TestJSONOutput/stop/Command (12.12s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.66s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-348600 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-348600 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (200.1515ms)

-- stdout --
	{"specversion":"1.0","id":"2b18ebad-5d9a-41fc-b941-94d54a8dc7a8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-348600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ac0b4364-d4a5-4697-b012-680876f0f9ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"6ffc143c-0356-45a0-a9c6-05b9791e745d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"70e1a147-a377-45b3-affb-9b7470fc4a07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"d6cbe3af-8771-42ff-9056-976f326c0617","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22168"}}
	{"specversion":"1.0","id":"ba7bf83c-236c-493a-aada-2e22a2f55525","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"712fe352-a8c1-407a-a851-24c46bbf12d5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:176: Cleaning up "json-output-error-348600" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-348600
--- PASS: TestErrorJSONOutput (0.66s)
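A minimal sketch of consuming the event stream captured above, assuming a jq binary is available (jq is not part of this suite): every line emitted under --output=json is a standalone CloudEvents-style JSON object, so the DRV_UNSUPPORTED_OS error can be filtered straight out of the stream.

    out/minikube-windows-amd64.exe start -p json-output-error-348600 --memory=3072 --output=json --wait=true --driver=fail | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.exitcode + ": " + .data.message'
    # prints: 56: The driver 'fail' is not supported on windows/amd64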
TestKicCustomNetwork/create_custom_network (54.63s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-025300 --network=
E1217 01:16:45.428139    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-025300 --network=: (51.0990233s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-025300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-025300
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-025300: (3.4696421s)
--- PASS: TestKicCustomNetwork/create_custom_network (54.63s)

TestKicCustomNetwork/use_default_bridge_network (52.67s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-network-774900 --network=bridge
E1217 01:17:57.204884    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:18:14.130607    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-network-774900 --network=bridge: (49.412652s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:176: Cleaning up "docker-network-774900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-network-774900
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-network-774900: (3.1960447s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (52.67s)

TestKicExistingNetwork (54.44s)

=== RUN   TestKicExistingNetwork
I1217 01:18:32.240676    4168 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1217 01:18:32.295809    4168 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1217 01:18:32.301481    4168 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1217 01:18:32.301481    4168 cli_runner.go:164] Run: docker network inspect existing-network
W1217 01:18:32.356766    4168 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1217 01:18:32.356766    4168 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1217 01:18:32.356766    4168 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1217 01:18:32.364721    4168 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1217 01:18:32.439445    4168 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0014b0810}
I1217 01:18:32.439445    4168 network_create.go:124] attempt to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I1217 01:18:32.443355    4168 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
W1217 01:18:32.499087    4168 cli_runner.go:211] docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network returned with exit code 1
W1217 01:18:32.499087    4168 network_create.go:149] failed to create docker network existing-network 192.168.49.0/24 with gateway 192.168.49.1 and mtu of 1500: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network: exit status 1
stdout:

stderr:
Error response from daemon: invalid pool request: Pool overlaps with other one on this address space
W1217 01:18:32.499087    4168 network_create.go:116] failed to create docker network existing-network 192.168.49.0/24, will retry: subnet is taken
I1217 01:18:32.529643    4168 network.go:209] skipping subnet 192.168.49.0/24 that is reserved: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:<nil>}
I1217 01:18:32.544040    4168 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000ded380}
I1217 01:18:32.544040    4168 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1217 01:18:32.548004    4168 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1217 01:18:32.691510    4168 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-windows-amd64.exe start -p existing-network-510200 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-windows-amd64.exe start -p existing-network-510200 --network=existing-network: (50.6729277s)
helpers_test.go:176: Cleaning up "existing-network-510200" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p existing-network-510200
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p existing-network-510200: (3.1385672s)
I1217 01:19:26.571439    4168 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (54.44s)
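A sketch of the subnet fallback visible in the log above, using trimmed versions of the docker commands the driver runs (labels and bridge options dropped): the first create fails with "Pool overlaps with other one on this address space" because 192.168.49.0/24 is already taken, so the next free /24 is tried.

    docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 existing-network || docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 existing-network
    docker network ls --format {{.Name}}   # existing-network should now appear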
TestKicCustomSubnet (52.58s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-subnet-437700 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-subnet-437700 --subnet=192.168.60.0/24: (49.0352788s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-437700 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:176: Cleaning up "custom-subnet-437700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p custom-subnet-437700
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p custom-subnet-437700: (3.486322s)
--- PASS: TestKicCustomSubnet (52.58s)
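A hand-run equivalent of the check above, reusing the start flags and the inspect template from this run:

    out/minikube-windows-amd64.exe start -p custom-subnet-437700 --subnet=192.168.60.0/24
    docker network inspect custom-subnet-437700 --format "{{(index .IPAM.Config 0).Subnet}}"
    # expected: 192.168.60.0/24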
TestKicStaticIP (56.87s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe start -p static-ip-786700 --static-ip=192.168.200.200
E1217 01:20:22.359296    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:20:33.724600    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe start -p static-ip-786700 --static-ip=192.168.200.200: (52.9672469s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-windows-amd64.exe -p static-ip-786700 ip
helpers_test.go:176: Cleaning up "static-ip-786700" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p static-ip-786700
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p static-ip-786700: (3.6035967s)
--- PASS: TestKicStaticIP (56.87s)

TestMainNoArgs (0.16s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.16s)

TestMinikubeProfile (102s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-156800 --driver=docker
E1217 01:21:56.803759    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-156800 --driver=docker: (44.6224885s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-156800 --driver=docker
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-156800 --driver=docker: (47.3028007s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-156800
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.181671s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-156800
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (1.1659897s)
helpers_test.go:176: Cleaning up "second-156800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-156800
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-156800: (3.6206461s)
helpers_test.go:176: Cleaning up "first-156800" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-156800
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-156800: (3.6470148s)
--- PASS: TestMinikubeProfile (102.00s)
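A sketch for eyeballing what the test parses, again assuming jq is available; the .valid[].Name path is an assumption about the JSON shape of `profile list -ojson`, not something this log confirms.

    out/minikube-windows-amd64.exe profile list -ojson | jq -r '.valid[].Name'
    # expected to include first-156800 and second-156800 while both profiles exist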
TestMountStart/serial/StartWithMountFirst (13.89s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-602300 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial3192429077\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-602300 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial3192429077\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker: (12.8886055s)
--- PASS: TestMountStart/serial/StartWithMountFirst (13.89s)

TestMountStart/serial/VerifyMountFirst (0.56s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-602300 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.56s)

TestMountStart/serial/StartWithMountSecond (13.55s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-602300 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial3192429077\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker
E1217 01:23:14.134626    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
mount_start_test.go:118: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-602300 --memory=3072 --mount-string C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMountStartserial3192429077\001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker: (12.5479402s)
--- PASS: TestMountStart/serial/StartWithMountSecond (13.55s)

TestMountStart/serial/VerifyMountSecond (0.53s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-602300 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.53s)

TestMountStart/serial/DeleteFirst (2.43s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-602300 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-602300 --alsologtostderr -v=5: (2.4249824s)
--- PASS: TestMountStart/serial/DeleteFirst (2.43s)

TestMountStart/serial/VerifyMountPostDelete (0.57s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-602300 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.57s)

TestMountStart/serial/Stop (1.86s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-602300
mount_start_test.go:196: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-602300: (1.8562384s)
--- PASS: TestMountStart/serial/Stop (1.86s)

TestMountStart/serial/RestartStopped (10.85s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-602300
mount_start_test.go:207: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-602300: (9.8447961s)
--- PASS: TestMountStart/serial/RestartStopped (10.85s)

TestMountStart/serial/VerifyMountPostStop (0.56s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-602300 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.56s)

TestMultiNode/serial/FreshStart2Nodes (134.15s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-714400 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker
E1217 01:25:22.363464    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:25:33.728749    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-714400 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker: (2m13.1609357s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (134.15s)

TestMultiNode/serial/DeployApp2Nodes (7.03s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-714400 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-714400 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-714400 -- rollout status deployment/busybox: (3.4534081s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-714400 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-714400 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-714400 -- exec busybox-7b57f96db7-svwbl -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-714400 -- exec busybox-7b57f96db7-zklpm -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-714400 -- exec busybox-7b57f96db7-svwbl -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-714400 -- exec busybox-7b57f96db7-zklpm -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-714400 -- exec busybox-7b57f96db7-svwbl -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-714400 -- exec busybox-7b57f96db7-zklpm -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.03s)

TestMultiNode/serial/PingHostFrom2Pods (1.75s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-714400 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-714400 -- exec busybox-7b57f96db7-svwbl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-714400 -- exec busybox-7b57f96db7-svwbl -- sh -c "ping -c 1 192.168.65.254"
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-714400 -- exec busybox-7b57f96db7-zklpm -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-714400 -- exec busybox-7b57f96db7-zklpm -- sh -c "ping -c 1 192.168.65.254"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.75s)
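The pipeline above, spelled out (pod name copied from this run; NR==5 encodes the test's assumption that busybox's nslookup prints the resolved address on line 5):

    kubectl --context multinode-714400 exec busybox-7b57f96db7-svwbl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    # prints the host IP (192.168.65.254 here), which the test then verifies with: ping -c 1 192.168.65.254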
TestMultiNode/serial/AddNode (53.69s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-714400 -v=5 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-714400 -v=5 --alsologtostderr: (52.3837417s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 status --alsologtostderr
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-714400 status --alsologtostderr: (1.307659s)
--- PASS: TestMultiNode/serial/AddNode (53.69s)

TestMultiNode/serial/MultiNodeLabels (0.14s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-714400 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.14s)

TestMultiNode/serial/ProfileList (1.39s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.3929585s)
--- PASS: TestMultiNode/serial/ProfileList (1.39s)

TestMultiNode/serial/CopyFile (19.19s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-714400 status --output json --alsologtostderr: (1.281233s)
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 cp testdata\cp-test.txt multinode-714400:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 cp multinode-714400:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile2815052608\001\cp-test_multinode-714400.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 cp multinode-714400:/home/docker/cp-test.txt multinode-714400-m02:/home/docker/cp-test_multinode-714400_multinode-714400-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400-m02 "sudo cat /home/docker/cp-test_multinode-714400_multinode-714400-m02.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 cp multinode-714400:/home/docker/cp-test.txt multinode-714400-m03:/home/docker/cp-test_multinode-714400_multinode-714400-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400-m03 "sudo cat /home/docker/cp-test_multinode-714400_multinode-714400-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 cp testdata\cp-test.txt multinode-714400-m02:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 cp multinode-714400-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile2815052608\001\cp-test_multinode-714400-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 cp multinode-714400-m02:/home/docker/cp-test.txt multinode-714400:/home/docker/cp-test_multinode-714400-m02_multinode-714400.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400 "sudo cat /home/docker/cp-test_multinode-714400-m02_multinode-714400.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 cp multinode-714400-m02:/home/docker/cp-test.txt multinode-714400-m03:/home/docker/cp-test_multinode-714400-m02_multinode-714400-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400-m03 "sudo cat /home/docker/cp-test_multinode-714400-m02_multinode-714400-m03.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 cp testdata\cp-test.txt multinode-714400-m03:/home/docker/cp-test.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 cp multinode-714400-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube4\AppData\Local\Temp\TestMultiNodeserialCopyFile2815052608\001\cp-test_multinode-714400-m03.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 cp multinode-714400-m03:/home/docker/cp-test.txt multinode-714400:/home/docker/cp-test_multinode-714400-m03_multinode-714400.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400 "sudo cat /home/docker/cp-test_multinode-714400-m03_multinode-714400.txt"
helpers_test.go:574: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 cp multinode-714400-m03:/home/docker/cp-test.txt multinode-714400-m02:/home/docker/cp-test_multinode-714400-m03_multinode-714400-m02.txt
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:552: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400-m02 "sudo cat /home/docker/cp-test_multinode-714400-m03_multinode-714400-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (19.19s)
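The copy matrix above reduces to two primitives per node pair, shown here with commands copied verbatim from the run:

    out/minikube-windows-amd64.exe -p multinode-714400 cp testdata\cp-test.txt multinode-714400-m02:/home/docker/cp-test.txt
    out/minikube-windows-amd64.exe -p multinode-714400 ssh -n multinode-714400-m02 "sudo cat /home/docker/cp-test.txt"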
TestMultiNode/serial/StopNode (3.75s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-714400 node stop m03: (1.6777046s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-714400 status: exit status 7 (1.0478598s)

-- stdout --
	multinode-714400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-714400-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-714400-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-714400 status --alsologtostderr: exit status 7 (1.0258329s)

-- stdout --
	multinode-714400
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-714400-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-714400-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1217 01:27:26.276989   13708 out.go:360] Setting OutFile to fd 1356 ...
	I1217 01:27:26.319858   13708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:27:26.319858   13708 out.go:374] Setting ErrFile to fd 1484...
	I1217 01:27:26.319858   13708 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:27:26.335419   13708 out.go:368] Setting JSON to false
	I1217 01:27:26.335419   13708 mustload.go:66] Loading cluster: multinode-714400
	I1217 01:27:26.335419   13708 notify.go:221] Checking for updates...
	I1217 01:27:26.336006   13708 config.go:182] Loaded profile config "multinode-714400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:27:26.336006   13708 status.go:174] checking status of multinode-714400 ...
	I1217 01:27:26.342892   13708 cli_runner.go:164] Run: docker container inspect multinode-714400 --format={{.State.Status}}
	I1217 01:27:26.399394   13708 status.go:371] multinode-714400 host status = "Running" (err=<nil>)
	I1217 01:27:26.399394   13708 host.go:66] Checking if "multinode-714400" exists ...
	I1217 01:27:26.403396   13708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-714400
	I1217 01:27:26.460408   13708 host.go:66] Checking if "multinode-714400" exists ...
	I1217 01:27:26.464412   13708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:27:26.467396   13708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-714400
	I1217 01:27:26.520981   13708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59709 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-714400\id_rsa Username:docker}
	I1217 01:27:26.649249   13708 ssh_runner.go:195] Run: systemctl --version
	I1217 01:27:26.670487   13708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:27:26.692740   13708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}'" multinode-714400
	I1217 01:27:26.749031   13708 kubeconfig.go:125] found "multinode-714400" server: "https://127.0.0.1:59708"
	I1217 01:27:26.750087   13708 api_server.go:166] Checking apiserver status ...
	I1217 01:27:26.754752   13708 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1217 01:27:26.780079   13708 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2222/cgroup
	I1217 01:27:26.793512   13708 api_server.go:182] apiserver freezer: "7:freezer:/docker/7d74b9244e40c6e324b930657d6f787c3ed76ad539838ca268be7fa4914e58c7/kubepods/burstable/pod3b8c42195a3d1e65199beee283e5158f/c6ce01e3294def64805ae4d4ed55814187c0642bab42c8cc16d2558e7dd323e1"
	I1217 01:27:26.797480   13708 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/7d74b9244e40c6e324b930657d6f787c3ed76ad539838ca268be7fa4914e58c7/kubepods/burstable/pod3b8c42195a3d1e65199beee283e5158f/c6ce01e3294def64805ae4d4ed55814187c0642bab42c8cc16d2558e7dd323e1/freezer.state
	I1217 01:27:26.812584   13708 api_server.go:204] freezer state: "THAWED"
	I1217 01:27:26.812662   13708 api_server.go:253] Checking apiserver healthz at https://127.0.0.1:59708/healthz ...
	I1217 01:27:26.825322   13708 api_server.go:279] https://127.0.0.1:59708/healthz returned 200:
	ok
	I1217 01:27:26.825384   13708 status.go:463] multinode-714400 apiserver status = Running (err=<nil>)
	I1217 01:27:26.825384   13708 status.go:176] multinode-714400 status: &{Name:multinode-714400 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:27:26.825432   13708 status.go:174] checking status of multinode-714400-m02 ...
	I1217 01:27:26.832873   13708 cli_runner.go:164] Run: docker container inspect multinode-714400-m02 --format={{.State.Status}}
	I1217 01:27:26.887082   13708 status.go:371] multinode-714400-m02 host status = "Running" (err=<nil>)
	I1217 01:27:26.887082   13708 host.go:66] Checking if "multinode-714400-m02" exists ...
	I1217 01:27:26.891699   13708 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-714400-m02
	I1217 01:27:26.948058   13708 host.go:66] Checking if "multinode-714400-m02" exists ...
	I1217 01:27:26.953244   13708 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1217 01:27:26.956411   13708 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-714400-m02
	I1217 01:27:27.008420   13708 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:59762 SSHKeyPath:C:\Users\jenkins.minikube4\minikube-integration\.minikube\machines\multinode-714400-m02\id_rsa Username:docker}
	I1217 01:27:27.126892   13708 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1217 01:27:27.147300   13708 status.go:176] multinode-714400-m02 status: &{Name:multinode-714400-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:27:27.147300   13708 status.go:174] checking status of multinode-714400-m03 ...
	I1217 01:27:27.154720   13708 cli_runner.go:164] Run: docker container inspect multinode-714400-m03 --format={{.State.Status}}
	I1217 01:27:27.208781   13708 status.go:371] multinode-714400-m03 host status = "Stopped" (err=<nil>)
	I1217 01:27:27.208781   13708 status.go:384] host is not running, skipping remaining checks
	I1217 01:27:27.208781   13708 status.go:176] multinode-714400-m03 status: &{Name:multinode-714400-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.75s)

TestMultiNode/serial/StartAfterStop (13.45s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-714400 node start m03 -v=5 --alsologtostderr: (12.0013578s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 status -v=5 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-714400 status -v=5 --alsologtostderr: (1.327707s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (13.45s)

TestMultiNode/serial/RestartKeepsNodes (89.3s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-714400
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-714400
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-714400: (24.887923s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-714400 --wait=true -v=5 --alsologtostderr
E1217 01:28:14.138017    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:326: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-714400 --wait=true -v=5 --alsologtostderr: (1m4.1199079s)
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-714400
--- PASS: TestMultiNode/serial/RestartKeepsNodes (89.30s)

TestMultiNode/serial/DeleteNode (8.35s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-714400 node delete m03: (6.8195856s)
multinode_test.go:422: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 status --alsologtostderr
multinode_test.go:422: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-714400 status --alsologtostderr: (1.1476218s)
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (8.35s)

TestMultiNode/serial/StopMultiNode (23.98s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 stop
multinode_test.go:345: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-714400 stop: (23.424076s)
multinode_test.go:351: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-714400 status: exit status 7 (281.8615ms)

-- stdout --
	multinode-714400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-714400-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-714400 status --alsologtostderr: exit status 7 (276.4963ms)

-- stdout --
	multinode-714400
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-714400-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1217 01:29:42.119505    5752 out.go:360] Setting OutFile to fd 1636 ...
	I1217 01:29:42.162991    5752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:29:42.162991    5752 out.go:374] Setting ErrFile to fd 1680...
	I1217 01:29:42.162991    5752 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1217 01:29:42.173721    5752 out.go:368] Setting JSON to false
	I1217 01:29:42.173721    5752 mustload.go:66] Loading cluster: multinode-714400
	I1217 01:29:42.174149    5752 notify.go:221] Checking for updates...
	I1217 01:29:42.174306    5752 config.go:182] Loaded profile config "multinode-714400": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
	I1217 01:29:42.174306    5752 status.go:174] checking status of multinode-714400 ...
	I1217 01:29:42.181431    5752 cli_runner.go:164] Run: docker container inspect multinode-714400 --format={{.State.Status}}
	I1217 01:29:42.239005    5752 status.go:371] multinode-714400 host status = "Stopped" (err=<nil>)
	I1217 01:29:42.239005    5752 status.go:384] host is not running, skipping remaining checks
	I1217 01:29:42.239005    5752 status.go:176] multinode-714400 status: &{Name:multinode-714400 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1217 01:29:42.239005    5752 status.go:174] checking status of multinode-714400-m02 ...
	I1217 01:29:42.246049    5752 cli_runner.go:164] Run: docker container inspect multinode-714400-m02 --format={{.State.Status}}
	I1217 01:29:42.301487    5752 status.go:371] multinode-714400-m02 host status = "Stopped" (err=<nil>)
	I1217 01:29:42.301487    5752 status.go:384] host is not running, skipping remaining checks
	I1217 01:29:42.301487    5752 status.go:176] multinode-714400-m02 status: &{Name:multinode-714400-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.98s)

TestMultiNode/serial/RestartMultiNode (60.2s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-714400 --wait=true -v=5 --alsologtostderr --driver=docker
E1217 01:30:22.367840    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:30:33.732910    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:376: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-714400 --wait=true -v=5 --alsologtostderr --driver=docker: (58.9139516s)
multinode_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-714400 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (60.20s)

TestMultiNode/serial/ValidateNameConflict (48.23s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-714400
multinode_test.go:464: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-714400-m02 --driver=docker
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-714400-m02 --driver=docker: exit status 14 (237.1591ms)

-- stdout --
	* [multinode-714400-m02] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-714400-m02' is duplicated with machine name 'multinode-714400-m02' in profile 'multinode-714400'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-714400-m03 --driver=docker
multinode_test.go:472: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-714400-m03 --driver=docker: (43.5623069s)
multinode_test.go:479: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-714400
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node add -p multinode-714400: exit status 80 (686.0495ms)

-- stdout --
	* Adding node m03 to cluster multinode-714400 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-714400-m03 already exists in multinode-714400-m03 profile
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                       │
	│    * If the above advice does not help, please let us know:                                                           │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                         │
	│                                                                                                                       │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                              │
	│    * Please also attach the following file to the GitHub issue:                                                       │
	│    * - C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube_node_6ccce2fc44e3bb58d6c4f91e09ae7c7eaaf65535_21.log    │
	│                                                                                                                       │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-windows-amd64.exe delete -p multinode-714400-m03
multinode_test.go:484: (dbg) Done: out/minikube-windows-amd64.exe delete -p multinode-714400-m03: (3.5962683s)
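The non-zero exits above carry minikube's reserved error codes: 14 for MK_USAGE (the duplicate profile name) and 80 for GUEST_NODE_ADD, as the stderr blocks show. A hedged sketch of recovering such a code with os/exec, the way the harness does (profile name copied from the log; illustrative only):

// exitcode_check.go (hypothetical): capture a command's exit code and
// stderr the way the harness does. Profile name copied from the log above.
package main

import (
	"bytes"
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("minikube", "start", "-p", "multinode-714400-m02", "--driver=docker")
	var stderr bytes.Buffer
	cmd.Stderr = &stderr
	err := cmd.Run()
	var ee *exec.ExitError
	if errors.As(err, &ee) {
		// 14 (MK_USAGE) is what the duplicate profile name produced above
		fmt.Printf("exit status %d\n%s", ee.ExitCode(), stderr.String())
		return
	}
	if err != nil {
		fmt.Println("could not run:", err) // e.g. binary not found
		return
	}
	fmt.Println("started without error")
}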
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.23s)

TestPreload (159.68s)

=== RUN   TestPreload
preload_test.go:41: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-106900 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker
E1217 01:33:14.142661    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:41: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-106900 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker: (1m37.5299713s)
preload_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-106900 image pull gcr.io/k8s-minikube/busybox
preload_test.go:49: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-106900 image pull gcr.io/k8s-minikube/busybox: (2.0915059s)
preload_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-106900
E1217 01:33:25.443830    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-106900: (11.8903582s)
preload_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-106900 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker
preload_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-106900 --preload=true --alsologtostderr -v=1 --wait=true --driver=docker: (43.9500774s)
preload_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-106900 image list
helpers_test.go:176: Cleaning up "test-preload-106900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-106900
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-106900: (3.7458688s)
--- PASS: TestPreload (159.68s)

TestScheduledStopWindows (116s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-252000 --memory=3072 --driver=docker
E1217 01:34:37.220490    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-252000 --memory=3072 --driver=docker: (49.7822887s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-252000 --schedule 5m
minikube stop output:

scheduled_stop_test.go:204: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-252000 -n scheduled-stop-252000
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-252000 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-252000 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-252000 --schedule 5s: (1.0929132s)
minikube stop output:

E1217 01:35:22.371944    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:35:33.737327    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-252000
scheduled_stop_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-252000: exit status 7 (220.2407ms)

-- stdout --
	scheduled-stop-252000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-252000 -n scheduled-stop-252000
scheduled_stop_test.go:189: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-252000 -n scheduled-stop-252000: exit status 7 (211.5391ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:189: status error: exit status 7 (may be ok)
helpers_test.go:176: Cleaning up "scheduled-stop-252000" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-252000
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-252000: (2.4978027s)
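The assertions above poll minikube status until the scheduled stop fires; once the host is down, status exits 7 (flagged "may be ok" by the test) but still prints the formatted field. A rough polling sketch under those assumptions (profile name from the log; illustrative only):

// wait_stopped.go (hypothetical): poll the Host field until the scheduled
// stop fires, mirroring the checks above. Times out after two minutes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		// status exits 7 once the host is stopped, but the formatted
		// output is still written, so the error is ignored here
		out, _ := exec.Command("minikube", "status", "-p", "scheduled-stop-252000",
			"--format={{.Host}}").Output()
		if strings.TrimSpace(string(out)) == "Stopped" {
			fmt.Println("host stopped")
			return
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the scheduled stop")
}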
--- PASS: TestScheduledStopWindows (116.00s)

TestInsufficientStorage (28.43s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-windows-amd64.exe start -p insufficient-storage-298900 --memory=3072 --output=json --wait=true --driver=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p insufficient-storage-298900 --memory=3072 --output=json --wait=true --driver=docker: exit status 26 (24.6749242s)

-- stdout --
	{"specversion":"1.0","id":"13a98a9d-f8fe-4d97-96e4-3f1b7e343011","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-298900] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"facab139-f067-4e23-b2c0-addd17e6bede","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube4\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"90f42bc9-2af3-4e4d-8a07-6b6b392bc40f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"1f6c6986-e6d6-477d-adcc-18e3e6ecd07f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"30f96282-9fae-4f55-b568-e79f9d3a1fc8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=22168"}}
	{"specversion":"1.0","id":"be66b126-ec9d-48a8-995d-718d8e5d4171","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"a38a8442-d90d-47f7-8c66-0d5b10b43572","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"83d01534-3cd4-4d10-a82a-7982e0e3edce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"cdedf273-1686-4b7a-9068-8be9c513b56e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"f620fa02-547b-4b91-a8e2-5585c9bd9c13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker Desktop driver with root privileges"}}
	{"specversion":"1.0","id":"ef8ad281-a4d3-4d92-a8f9-73d707aaacaf","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-298900\" primary control-plane node in \"insufficient-storage-298900\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"11239600-4b22-454b-9736-e118001d41a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1765661130-22141 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"9fa41aa2-1529-4e9a-b1b0-8475d7013289","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f6463b8e-e546-4e92-827c-bbaba7aace49","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-298900 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-298900 --output=json --layout=cluster: exit status 7 (569.243ms)

-- stdout --
	{"Name":"insufficient-storage-298900","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-298900","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1217 01:36:38.073792   10524 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-298900" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p insufficient-storage-298900 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p insufficient-storage-298900 --output=json --layout=cluster: exit status 7 (548.97ms)

-- stdout --
	{"Name":"insufficient-storage-298900","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-298900","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1217 01:36:38.622369    7548 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-298900" does not appear in C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	E1217 01:36:38.647583    7548 status.go:258] unable to read event log: stat: GetFileAttributesEx C:\Users\jenkins.minikube4\minikube-integration\.minikube\profiles\insufficient-storage-298900\events.json: The system cannot find the file specified.

** /stderr **
helpers_test.go:176: Cleaning up "insufficient-storage-298900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p insufficient-storage-298900
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p insufficient-storage-298900: (2.6386301s)
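Each line that start --output=json emits is a CloudEvents-style JSON object, as in the stdout block above (step events plus the final RSRC_DOCKER_STORAGE error with exitcode 26). A minimal sketch for scanning such a stream; the field names are taken from the events above, and the file name is illustrative:

// parse_events.go (hypothetical): read the --output=json stream from
// stdin and surface step and error events. Field names are taken from
// the events shown above.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"` // io.k8s.sigs.minikube.step, .info, .error
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // event lines can be long
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // not an event line
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("%s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}

Fed the stdout block above, this would print the numbered setup steps and then the storage error, e.g. "RSRC_DOCKER_STORAGE (exit 26): Docker is out of disk space! ...".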
--- PASS: TestInsufficientStorage (28.43s)

TestRunningBinaryUpgrade (373.29s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.668498154.exe start -p running-upgrade-559900 --memory=3072 --vm-driver=docker
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.668498154.exe start -p running-upgrade-559900 --memory=3072 --vm-driver=docker: (54.47666s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-559900 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-559900 --memory=3072 --alsologtostderr -v=1 --driver=docker: (5m13.463965s)
helpers_test.go:176: Cleaning up "running-upgrade-559900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-559900
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-559900: (3.8389308s)
--- PASS: TestRunningBinaryUpgrade (373.29s)

TestMissingContainerUpgrade (233.18s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.1562046125.exe start -p missing-upgrade-561100 --memory=3072 --driver=docker
version_upgrade_test.go:309: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.1562046125.exe start -p missing-upgrade-561100 --memory=3072 --driver=docker: (2m28.7998517s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-561100
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-561100: (11.1104923s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-561100
version_upgrade_test.go:329: (dbg) Run:  out/minikube-windows-amd64.exe start -p missing-upgrade-561100 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:329: (dbg) Done: out/minikube-windows-amd64.exe start -p missing-upgrade-561100 --memory=3072 --alsologtostderr -v=1 --driver=docker: (1m8.2591536s)
helpers_test.go:176: Cleaning up "missing-upgrade-561100" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p missing-upgrade-561100
helpers_test.go:179: (dbg) Done: out/minikube-windows-amd64.exe delete -p missing-upgrade-561100: (3.5030082s)
--- PASS: TestMissingContainerUpgrade (233.18s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.25s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:108: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-158600 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker
no_kubernetes_test.go:108: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-158600 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker: exit status 14 (253.8586ms)

-- stdout --
	* [NoKubernetes-158600] minikube v1.37.0 on Microsoft Windows 10 Enterprise N 10.0.19045.6575 Build 19045.6575
	  - KUBECONFIG=C:\Users\jenkins.minikube4\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube4\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=22168
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.25s)

TestNoKubernetes/serial/StartWithK8s (75.16s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:120: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-158600 --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:120: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-158600 --memory=3072 --alsologtostderr -v=5 --driver=docker: (1m14.4955072s)
no_kubernetes_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-158600 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (75.16s)

TestNoKubernetes/serial/StartWithStopK8s (25.61s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-158600 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-158600 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker: (22.0798715s)
no_kubernetes_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe -p NoKubernetes-158600 status -o json
no_kubernetes_test.go:226: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p NoKubernetes-158600 status -o json: exit status 2 (645.0551ms)

-- stdout --
	{"Name":"NoKubernetes-158600","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-windows-amd64.exe delete -p NoKubernetes-158600
no_kubernetes_test.go:149: (dbg) Done: out/minikube-windows-amd64.exe delete -p NoKubernetes-158600: (2.8836956s)
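With Kubernetes stopped, status -o json exits 2 but still writes the flat profile document seen above. A sketch of decoding it; the struct fields simply mirror that JSON and are not an official minikube type:

// profile_status.go (hypothetical): decode the flat `status -o json`
// shape shown above. Fields mirror that JSON; not an official API type.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type profileStatus struct {
	Name, Host, Kubelet, APIServer, Kubeconfig string
	Worker                                     bool
}

func main() {
	// exit status 2 (components stopped) still writes the JSON to stdout,
	// so the exit error is deliberately ignored
	out, _ := exec.Command("minikube", "-p", "NoKubernetes-158600",
		"status", "-o", "json").Output()
	var st profileStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n",
		st.Name, st.Host, st.Kubelet, st.APIServer)
}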
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (25.61s)

TestNoKubernetes/serial/Start (15.21s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:162: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-158600 --no-kubernetes --cpus=1 --memory=3072 --alsologtostderr -v=5 --driver=docker
no_kubernetes_test.go:162: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-158600 --no-kubernetes --cpus=1 --memory=3072 --alsologtostderr -v=5 --driver=docker: (15.2143784s)
--- PASS: TestNoKubernetes/serial/Start (15.21s)

TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0s)

=== RUN   TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads
no_kubernetes_test.go:89: Checking cache directory: C:\Users\jenkins.minikube4\minikube-integration\.minikube\cache\windows\amd64\v0.0.0
--- PASS: TestNoKubernetes/serial/VerifyNok8sNoK8sDownloads (0.00s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.87s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-158600 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-158600 "sudo systemctl is-active --quiet service kubelet": exit status 1 (865.6746ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
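The non-zero exit here is the pass condition: systemctl is-active returns 0 only for an active unit, and the inner ssh's status 3 is systemd's code for an inactive one. A tiny sketch of asserting the same thing from Go (profile name from the log; illustrative only):

// kubelet_inactive.go (hypothetical): assert kubelet is NOT active,
// mirroring the check above. A zero exit would mean the unit is running.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	err := exec.Command("minikube", "ssh", "-p", "NoKubernetes-158600",
		"sudo systemctl is-active --quiet service kubelet").Run()
	if err == nil {
		fmt.Println("kubelet is unexpectedly active")
		os.Exit(1)
	}
	fmt.Println("kubelet inactive, as expected:", err)
}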
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.87s)

TestNoKubernetes/serial/ProfileList (3.48s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:195: (dbg) Run:  out/minikube-windows-amd64.exe profile list
no_kubernetes_test.go:195: (dbg) Done: out/minikube-windows-amd64.exe profile list: (1.5829292s)
no_kubernetes_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output=json
no_kubernetes_test.go:205: (dbg) Done: out/minikube-windows-amd64.exe profile list --output=json: (1.8931902s)
--- PASS: TestNoKubernetes/serial/ProfileList (3.48s)

TestNoKubernetes/serial/Stop (2.32s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe stop -p NoKubernetes-158600
no_kubernetes_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe stop -p NoKubernetes-158600: (2.3171054s)
--- PASS: TestNoKubernetes/serial/Stop (2.32s)

TestNoKubernetes/serial/StartNoArgs (11.6s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:217: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-158600 --driver=docker
no_kubernetes_test.go:217: (dbg) Done: out/minikube-windows-amd64.exe start -p NoKubernetes-158600 --driver=docker: (11.5986973s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (11.60s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.68s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:173: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p NoKubernetes-158600 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:173: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p NoKubernetes-158600 "sudo systemctl is-active --quiet service kubelet": exit status 1 (680.1505ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.68s)

TestStoppedBinaryUpgrade/Setup (1.79s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.79s)

TestStoppedBinaryUpgrade/Upgrade (436.25s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3461439498.exe start -p stopped-upgrade-179500 --memory=3072 --vm-driver=docker
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3461439498.exe start -p stopped-upgrade-179500 --memory=3072 --vm-driver=docker: (2m32.0138235s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3461439498.exe -p stopped-upgrade-179500 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube4\AppData\Local\Temp\minikube-v1.35.0.3461439498.exe -p stopped-upgrade-179500 stop: (10.9657249s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-179500 --memory=3072 --alsologtostderr -v=1 --driver=docker
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-179500 --memory=3072 --alsologtostderr -v=1 --driver=docker: (4m33.2700597s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (436.25s)

TestPause/serial/Start (86.29s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-446300 --memory=3072 --install-addons=false --wait=all --driver=docker
E1217 01:43:14.150828    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-446300 --memory=3072 --install-addons=false --wait=all --driver=docker: (1m26.2912601s)
--- PASS: TestPause/serial/Start (86.29s)

TestPause/serial/SecondStartNoReconfiguration (47.44s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-446300 --alsologtostderr -v=1 --driver=docker
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-446300 --alsologtostderr -v=1 --driver=docker: (47.4257212s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (47.44s)

TestPause/serial/Pause (1.05s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-446300 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-446300 --alsologtostderr -v=5: (1.051966s)
--- PASS: TestPause/serial/Pause (1.05s)

TestPause/serial/VerifyStatus (0.64s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-446300 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-446300 --output=json --layout=cluster: exit status 2 (636.5901ms)

-- stdout --
	{"Name":"pause-446300","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, istio-operator","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-446300","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
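The cluster-layout status above reuses HTTP-style codes (200 OK, 405 Stopped, 418 Paused here; 507 InsufficientStorage earlier in this report). A hedged sketch of decoding it, with the struct shape inferred from this JSON rather than taken from any official minikube type:

// cluster_status.go (hypothetical): decode `minikube status --output=json
// --layout=cluster`. The struct shape is inferred from the JSON above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"os/exec"
)

type component struct {
	Name       string
	StatusCode int
	StatusName string
}

type clusterStatus struct {
	Name       string
	StatusCode int
	StatusName string
	Components map[string]component
	Nodes      []clusterStatus
}

func main() {
	// status exits 2 for a paused cluster but still prints the JSON
	out, _ := exec.Command("minikube", "status", "-p", "pause-446300",
		"--output=json", "--layout=cluster").Output()
	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Printf("%s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
	for _, n := range st.Nodes {
		for name, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, name, c.StatusName)
		}
	}
}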
--- PASS: TestPause/serial/VerifyStatus (0.64s)

TestPause/serial/Unpause (0.86s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-446300 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.86s)

TestPause/serial/PauseAgain (1.3s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-446300 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-446300 --alsologtostderr -v=5: (1.3039102s)
--- PASS: TestPause/serial/PauseAgain (1.30s)

TestPause/serial/DeletePaused (3.98s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p pause-446300 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p pause-446300 --alsologtostderr -v=5: (3.9759149s)
--- PASS: TestPause/serial/DeletePaused (3.98s)

TestPause/serial/VerifyDeletedResources (1.81s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (1.6126253s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-446300
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-446300: exit status 1 (67.4065ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-446300: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (1.81s)

TestNetworkPlugins/group/auto/Start (91.04s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p auto-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker
E1217 01:45:22.380601    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:45:33.745060    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p auto-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker: (1m31.0434381s)
--- PASS: TestNetworkPlugins/group/auto/Start (91.04s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.47s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-179500
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-179500: (1.4720506s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.47s)

TestNetworkPlugins/group/calico/Start (104.76s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p calico-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p calico-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker: (1m44.7641014s)
--- PASS: TestNetworkPlugins/group/calico/Start (104.76s)

TestNetworkPlugins/group/auto/KubeletFlags (0.56s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p auto-891300 "pgrep -a kubelet"
I1217 01:46:51.935158    4168 config.go:182] Loaded profile config "auto-891300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.56s)

TestNetworkPlugins/group/auto/NetCatPod (14.48s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-891300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-zvkcj" [9993d738-1573-4270-b707-752a347bc779] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-zvkcj" [9993d738-1573-4270-b707-752a347bc779] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.0062175s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.48s)

TestNetworkPlugins/group/auto/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-891300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.25s)

TestNetworkPlugins/group/auto/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.21s)

TestNetworkPlugins/group/auto/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
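The Localhost step dials 127.0.0.1 from inside the netcat pod, while HairPin dials the pod's own Service name, i.e. traffic that leaves the pod and is routed straight back to it. A rough Go equivalent of the nc -w 5 -z probes, meaningful only when run inside such a pod where "netcat" resolves via cluster DNS (illustrative, not part of the suite):

// tcp_probe.go (hypothetical): the Go equivalent of the `nc -w 5 -z`
// checks above, a bare TCP connect with a timeout. Host "netcat"
// resolves via cluster DNS only when this runs inside the pod.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	for _, host := range []string{"localhost", "netcat"} {
		conn, err := net.DialTimeout("tcp", net.JoinHostPort(host, "8080"), 5*time.Second)
		if err != nil {
			fmt.Printf("%s:8080 unreachable: %v\n", host, err)
			os.Exit(1)
		}
		conn.Close()
		fmt.Printf("%s:8080 reachable\n", host)
	}
}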
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.22s)

TestNetworkPlugins/group/custom-flannel/Start (78.44s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p custom-flannel-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p custom-flannel-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata\kube-flannel.yaml --driver=docker: (1m18.4427549s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (78.44s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:353: "calico-node-dqrdn" [634a3f00-1205-404f-adbf-0b4b481c5e53] Running / Ready:ContainersNotReady (containers with unready status: [calico-node]) / ContainersReady:ContainersNotReady (containers with unready status: [calico-node])
helpers_test.go:353: "calico-node-dqrdn" [634a3f00-1205-404f-adbf-0b4b481c5e53] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.0183875s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.58s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p calico-891300 "pgrep -a kubelet"
I1217 01:48:12.781900    4168 config.go:182] Loaded profile config "calico-891300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.58s)

TestNetworkPlugins/group/calico/NetCatPod (15.48s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-891300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-qkkrw" [4e2988ab-3618-4df3-b427-4bff0c14352f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1217 01:48:14.154872    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-qkkrw" [4e2988ab-3618-4df3-b427-4bff0c14352f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 15.0056231s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (15.48s)

TestNetworkPlugins/group/calico/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-891300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.28s)

TestNetworkPlugins/group/calico/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

TestNetworkPlugins/group/calico/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.21s)

TestNetworkPlugins/group/false/Start (90.79s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p false-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=docker: (1m30.7874608s)
--- PASS: TestNetworkPlugins/group/false/Start (90.79s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.59s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p custom-flannel-891300 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.59s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (15.83s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-891300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-44j89" [3677bb22-3a95-4f1e-ae15-0e6877a67a94] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-44j89" [3677bb22-3a95-4f1e-ae15-0e6877a67a94] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 15.2779831s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (15.83s)

TestNetworkPlugins/group/kindnet/Start (91.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kindnet-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kindnet-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker: (1m31.1575985s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (91.16s)

TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-891300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.24s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (71.69s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p flannel-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p flannel-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker: (1m11.686812s)
--- PASS: TestNetworkPlugins/group/flannel/Start (71.69s)

TestNetworkPlugins/group/false/KubeletFlags (0.64s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p false-891300 "pgrep -a kubelet"
I1217 01:50:04.743144    4168 config.go:182] Loaded profile config "false-891300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.64s)

TestNetworkPlugins/group/false/NetCatPod (23.26s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-891300 replace --force -f testdata\netcat-deployment.yaml
E1217 01:50:05.460677    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:149: (dbg) Done: kubectl --context false-891300 replace --force -f testdata\netcat-deployment.yaml: (1.6929661s)
I1217 01:50:06.583026    4168 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-jq5sw" [30e1a9f4-bdee-4d91-8882-f599a5ad17c0] Pending
helpers_test.go:353: "netcat-cd4db9dbf-jq5sw" [30e1a9f4-bdee-4d91-8882-f599a5ad17c0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1217 01:50:22.384869    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-jq5sw" [30e1a9f4-bdee-4d91-8882-f599a5ad17c0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 21.0084002s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (23.26s)

TestNetworkPlugins/group/false/DNS (0.24s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-891300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.24s)

TestNetworkPlugins/group/false/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.20s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.21s)
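Taken together, the DNS, Localhost, and HairPin subtests for a profile reduce to three kubectl exec one-liners, each visible verbatim above. Collected here as a sketch for manual reruns against the same profile:

  # DNS: resolve the cluster API service through the in-cluster resolver.
  kubectl --context false-891300 exec deployment/netcat -- nslookup kubernetes.default
  # Localhost: the pod reaches its own port 8080 over loopback.
  kubectl --context false-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
  # HairPin: the pod reaches itself through its service name (hairpin traffic).
  kubectl --context false-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"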

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:353: "kindnet-lx9zs" [3d4e68b5-8828-4fbd-bc3f-6cf33e214788] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0073223s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.58s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kindnet-891300 "pgrep -a kubelet"
I1217 01:50:45.250495    4168 config.go:182] Loaded profile config "kindnet-891300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.58s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (15.65s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-891300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-pdbpw" [2b987eea-6739-4922-9e22-75ef049cdee2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-pdbpw" [2b987eea-6739-4922-9e22-75ef049cdee2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 15.0359383s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (15.65s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-891300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (90.08s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p enable-default-cni-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p enable-default-cni-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker: (1m30.081861s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (90.08s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:353: "kube-flannel-ds-p65d7" [6ba5e069-3e1e-40be-ade7-b4fa1de2ab62] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.0089558s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)
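The ControllerPod gate only verifies that a pod labeled app=flannel in the kube-flannel namespace is Running and healthy (10m budget in the harness). By hand that is a one-liner; label, namespace, and context are taken from the log:

  # List the flannel controller pod(s) the test waits on.
  kubectl --context flannel-891300 get pods -n kube-flannel -l app=flannel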

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.55s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p flannel-891300 "pgrep -a kubelet"
I1217 01:51:12.169508    4168 config.go:182] Loaded profile config "flannel-891300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.55s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (23.63s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-891300 replace --force -f testdata\netcat-deployment.yaml
I1217 01:51:12.748831    4168 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-4hq8j" [3e46d00d-faf1-4584-ad3d-263e8971cf1b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1217 01:51:17.236953    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-4hq8j" [3e46d00d-faf1-4584-ad3d-263e8971cf1b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 23.0052752s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (23.63s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-891300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.24s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.21s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (85.54s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p bridge-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
E1217 01:51:52.400869    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:51:52.407092    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:51:52.418882    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:51:52.440193    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:51:52.481460    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:51:52.563790    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:51:52.725342    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:51:53.047110    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:51:53.689017    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:51:54.971143    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:51:57.532463    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:52:02.655494    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p bridge-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker: (1m25.5415667s)
--- PASS: TestNetworkPlugins/group/bridge/Start (85.54s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (90.44s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubenet-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
net_test.go:112: (dbg) Done: out/minikube-windows-amd64.exe start -p kubenet-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker: (1m30.4377589s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (90.44s)
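Across this run the network-plugin matrix is driven entirely by the start flags; the invocations are otherwise identical (3072 MB, docker driver, 15m wait). Collected verbatim from the Start lines above for side-by-side comparison:

  # flannel and bridge select a CNI via --cni
  out/minikube-windows-amd64.exe start -p flannel-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker
  out/minikube-windows-amd64.exe start -p bridge-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker
  # kubenet uses the legacy --network-plugin flag instead of --cni
  out/minikube-windows-amd64.exe start -p kubenet-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=docker
  # the default bridge CNI is requested with --enable-default-cni
  out/minikube-windows-amd64.exe start -p enable-default-cni-891300 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker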

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.68s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p enable-default-cni-891300 "pgrep -a kubelet"
E1217 01:52:33.380156    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
I1217 01:52:33.429922    4168 config.go:182] Loaded profile config "enable-default-cni-891300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.68s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-891300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-b4z97" [6bd2c758-b171-48e2-a1b9-a03ec81f6ee6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:353: "netcat-cd4db9dbf-b4z97" [6bd2c758-b171-48e2-a1b9-a03ec81f6ee6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 15.00654s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (15.52s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-891300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.22s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.61s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p bridge-891300 "pgrep -a kubelet"
I1217 01:53:03.802911    4168 config.go:182] Loaded profile config "bridge-891300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.61s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (14.54s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-891300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-rr8z7" [7cdd1dd4-fe68-4f6f-b169-32da9dc4761c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1217 01:53:07.188292    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:53:07.196302    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:53:07.209283    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:53:07.232291    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:53:07.275302    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:53:07.358282    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:53:07.521281    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:53:07.844451    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:53:08.487132    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:53:09.769780    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:53:12.332129    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-rr8z7" [7cdd1dd4-fe68-4f6f-b169-32da9dc4761c] Running
E1217 01:53:14.159410    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-045600\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:53:14.344358    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:53:17.454394    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.0069539s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.54s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.27s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-891300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.27s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.25s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.22s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (109.72s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-044000 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
E1217 01:53:27.696166    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-044000 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (1m49.7203499s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (109.72s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.62s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p kubenet-891300 "pgrep -a kubelet"
I1217 01:53:45.926025    4168 config.go:182] Loaded profile config "kubenet-891300": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.34.2
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.62s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (14.48s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-891300 replace --force -f testdata\netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:353: "netcat-cd4db9dbf-mkqhk" [68a31151-6401-4afc-819b-00dde5255d4f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1217 01:53:48.179253    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "netcat-cd4db9dbf-mkqhk" [68a31151-6401-4afc-819b-00dde5255d4f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 14.0072512s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (14.48s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-891300 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.29s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-891300 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.20s)
E1217 01:57:27.563392    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (84.09s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-653800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2
E1217 01:54:22.335613    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:54:29.142577    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-653800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2: (1m24.0904278s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (84.09s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-278200 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2
E1217 01:54:42.818202    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:06.451451    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:06.458372    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:06.469800    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:06.492799    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:06.535800    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:06.618401    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:06.780738    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:07.103265    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:07.745825    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:09.028756    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:11.591199    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-278200 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2: (1m19.0945896s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (79.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (10.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-044000 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [8d4c4554-b3e1-43c4-b180-cc0de6c23f41] Pending
helpers_test.go:353: "busybox" [8d4c4554-b3e1-43c4-b180-cc0de6c23f41] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1217 01:55:16.713520    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:16.835875    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [8d4c4554-b3e1-43c4-b180-cc0de6c23f41] Running
E1217 01:55:22.388997    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\functional-409700\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.0059524s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-044000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.68s)
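DeployApp is the same two-step in every StartStop group below: create the busybox pod from testdata, wait for it to run, then read the container's open-file limit. As a sketch, using the commands exactly as logged:

  # Create the busybox test pod, then check its file-descriptor limit.
  kubectl --context old-k8s-version-044000 create -f testdata\busybox.yaml
  kubectl --context old-k8s-version-044000 exec busybox -- /bin/sh -c "ulimit -n"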

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-044000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1217 01:55:23.781168    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\custom-flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p old-k8s-version-044000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.6639924s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-044000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.88s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (12.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p old-k8s-version-044000 --alsologtostderr -v=3
E1217 01:55:26.956145    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:33.753989    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\addons-401400\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p old-k8s-version-044000 --alsologtostderr -v=3: (12.1319088s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.52s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-044000 -n old-k8s-version-044000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-044000 -n old-k8s-version-044000: exit status 7 (207.981ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-044000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.52s)
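The exit status 7 from minikube status against a stopped profile is tolerated by design; the test itself notes it "may be ok" and proceeds to enable the addon. A minimal PowerShell sketch of the same tolerate-then-enable sequence (the $LASTEXITCODE handling and quoting are illustrative, not harness code):

  # Query host state; a stopped profile returns a non-zero exit (7 in this log).
  out/minikube-windows-amd64.exe status --format='{{.Host}}' -p old-k8s-version-044000 -n old-k8s-version-044000
  if ($LASTEXITCODE -ne 0) { Write-Host "status exit $LASTEXITCODE (may be ok for a stopped profile)" }
  # Addons can still be toggled while the profile is stopped.
  out/minikube-windows-amd64.exe addons enable dashboard -p old-k8s-version-044000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4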

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (47.73s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p old-k8s-version-044000 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0
E1217 01:55:38.667152    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:38.675160    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:38.688152    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:38.711158    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:38.754159    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:38.836548    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:38.998540    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:39.320058    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p old-k8s-version-044000 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker --kubernetes-version=v1.28.0: (46.9813225s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p old-k8s-version-044000 -n old-k8s-version-044000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (47.73s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.51s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-653800 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [73f829fc-5b7d-4ee3-91af-e19edfe3e99f] Pending
E1217 01:55:39.962432    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [73f829fc-5b7d-4ee3-91af-e19edfe3e99f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1217 01:55:41.245326    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:43.807055    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "busybox" [73f829fc-5b7d-4ee3-91af-e19edfe3e99f] Running
E1217 01:55:47.439102    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.0081634s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-653800 exec busybox -- /bin/sh -c "ulimit -n"
E1217 01:55:48.930399    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.51s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.62s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-653800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p embed-certs-653800 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.4162262s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-653800 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.62s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (12.41s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p embed-certs-653800 --alsologtostderr -v=3
E1217 01:55:51.066464    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\calico-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:55:59.173216    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p embed-certs-653800 --alsologtostderr -v=3: (12.4140195s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.41s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.54s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-278200 create -f testdata\busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:353: "busybox" [1a76c751-353d-49cd-9e73-34d38d0ba086] Pending
helpers_test.go:353: "busybox" [1a76c751-353d-49cd-9e73-34d38d0ba086] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:353: "busybox" [1a76c751-353d-49cd-9e73-34d38d0ba086] Running
E1217 01:56:05.615799    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:05.622442    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:05.634445    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:05.655950    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:05.697557    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:05.779831    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:05.941872    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:06.264191    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:06.906567    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:08.189467    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.0172143s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-278200 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.54s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.55s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-653800 -n embed-certs-653800
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-653800 -n embed-certs-653800: exit status 7 (211.7481ms)

                                                
                                                
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-653800 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.55s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (60.14s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p embed-certs-653800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p embed-certs-653800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2: (59.4657381s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p embed-certs-653800 -n embed-certs-653800
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (60.14s)
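Each passing StartStop serial group above follows the same stop, enable-addon-while-stopped, restart lifecycle. For embed-certs it reads end to end (flags verbatim from the log):

  # Stop the profile, enable the dashboard addon while stopped, then restart.
  out/minikube-windows-amd64.exe stop -p embed-certs-653800 --alsologtostderr -v=3
  out/minikube-windows-amd64.exe addons enable dashboard -p embed-certs-653800 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
  out/minikube-windows-amd64.exe start -p embed-certs-653800 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker --kubernetes-version=v1.34.2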

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.7s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-278200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E1217 01:56:10.751081    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:203: (dbg) Done: out/minikube-windows-amd64.exe addons enable metrics-server -p default-k8s-diff-port-278200 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.4873292s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-278200 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.70s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.5s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-278200 --alsologtostderr -v=3
E1217 01:56:15.873996    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:56:19.656228    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\kindnet-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p default-k8s-diff-port-278200 --alsologtostderr -v=3: (12.4979234s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.50s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-278200 -n default-k8s-diff-port-278200
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-278200 -n default-k8s-diff-port-278200: exit status 7 (217.9957ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p default-k8s-diff-port-278200 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.60s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (60.83s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe start -p default-k8s-diff-port-278200 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe start -p default-k8s-diff-port-278200 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker --kubernetes-version=v1.34.2: (1m0.1744823s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p default-k8s-diff-port-278200 -n default-k8s-diff-port-278200
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (60.83s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-wgqv4" [10a813a3-5e35-48ff-b8a1-153b3d1cc0da] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
E1217 01:56:26.116994    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\flannel-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-wgqv4" [10a813a3-5e35-48ff-b8a1-153b3d1cc0da] Running
E1217 01:56:28.401346    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\false-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 8.0081412s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (8.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-8694d4445c-wgqv4" [10a813a3-5e35-48ff-b8a1-153b3d1cc0da] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0061265s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-044000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.45s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.54s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p old-k8s-version-044000 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.54s)

TestStartStop/group/old-k8s-version/serial/Pause (5.16s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p old-k8s-version-044000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p old-k8s-version-044000 --alsologtostderr -v=1: (1.2240416s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-044000 -n old-k8s-version-044000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-044000 -n old-k8s-version-044000: exit status 2 (646.77ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-044000 -n old-k8s-version-044000
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-044000 -n old-k8s-version-044000: exit status 2 (657.11ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p old-k8s-version-044000 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p old-k8s-version-044000 --alsologtostderr -v=1: (1.0233455s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p old-k8s-version-044000 -n old-k8s-version-044000
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p old-k8s-version-044000 -n old-k8s-version-044000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (5.16s)
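Note: a condensed sketch of the pause/unpause cycle exercised above, with the same profile; the comments restate what this log observed (Paused/Stopped output with exit status 2), not guaranteed behavior:

    minikube pause -p old-k8s-version-044000
    minikube status --format={{.APIServer}} -p old-k8s-version-044000   # observed above: Paused, exit status 2
    minikube status --format={{.Kubelet}} -p old-k8s-version-044000     # observed above: Stopped, exit status 2
    minikube unpause -p old-k8s-version-044000
    minikube status --format={{.APIServer}} -p old-k8s-version-044000   # expected to exit 0 once running again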

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.02s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-8whhq" [e9fa7cf1-f283-4ad4-a7c7-d05d5a6bd880] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0217168s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.33s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-8whhq" [e9fa7cf1-f283-4ad4-a7c7-d05d5a6bd880] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0094638s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-653800 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.33s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p embed-certs-653800 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.46s)

TestStartStop/group/embed-certs/serial/Pause (5.14s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p embed-certs-653800 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p embed-certs-653800 --alsologtostderr -v=1: (1.1965746s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-653800 -n embed-certs-653800
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-653800 -n embed-certs-653800: exit status 2 (615.3269ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-653800 -n embed-certs-653800
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-653800 -n embed-certs-653800: exit status 2 (654.2775ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p embed-certs-653800 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe unpause -p embed-certs-653800 --alsologtostderr -v=1: (1.0746021s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p embed-certs-653800 -n embed-certs-653800
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p embed-certs-653800 -n embed-certs-653800
E1217 01:57:20.112119    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\auto-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestStartStop/group/embed-certs/serial/Pause (5.14s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-kmrtn" [6be5644f-4168-472a-b8a3-38f3667f0aa1] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.0050623s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:353: "kubernetes-dashboard-855c9754f9-kmrtn" [6be5644f-4168-472a-b8a3-38f3667f0aa1] Running
E1217 01:57:33.934313    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:57:33.941198    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:57:33.952366    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:57:33.973542    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:57:34.015629    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:57:34.096945    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:57:34.259069    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:57:34.581112    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:57:35.224124    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
E1217 01:57:36.507431    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.0072172s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-278200 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.31s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p default-k8s-diff-port-278200 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.46s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.93s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-278200 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-windows-amd64.exe pause -p default-k8s-diff-port-278200 --alsologtostderr -v=1: (1.1430762s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-278200 -n default-k8s-diff-port-278200
E1217 01:57:39.069760    4168 cert_rotation.go:172] "Loading client cert failed" err="open C:\\Users\\jenkins.minikube4\\minikube-integration\\.minikube\\profiles\\enable-default-cni-891300\\client.crt: The system cannot find the path specified." logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-278200 -n default-k8s-diff-port-278200: exit status 2 (634.5543ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-278200 -n default-k8s-diff-port-278200
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-278200 -n default-k8s-diff-port-278200: exit status 2 (620.1028ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p default-k8s-diff-port-278200 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p default-k8s-diff-port-278200 -n default-k8s-diff-port-278200
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Kubelet}} -p default-k8s-diff-port-278200 -n default-k8s-diff-port-278200
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.93s)

TestStartStop/group/no-preload/serial/Stop (1.87s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p no-preload-184000 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p no-preload-184000 --alsologtostderr -v=3: (1.8669426s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (1.87s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.55s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-184000 -n no-preload-184000
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p no-preload-184000 -n no-preload-184000: exit status 7 (222.1264ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p no-preload-184000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.55s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.88s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-windows-amd64.exe stop -p newest-cni-383500 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-windows-amd64.exe stop -p newest-cni-383500 --alsologtostderr -v=3: (1.8821359s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.88s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.52s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-383500 -n newest-cni-383500
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p newest-cni-383500 -n newest-cni-383500: exit status 7 (203.165ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p newest-cni-383500 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.52s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
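Note: the warnings in this block and the previous one say that in cni mode pods cannot schedule until a network plugin is set up. A hedged way to confirm that state on this profile, assuming general Kubernetes behavior (NotReady node, Pending pods) rather than output from this run:

    # Without a CNI plugin the node is expected to stay NotReady...
    kubectl --context newest-cni-383500 get nodes
    # ...and non-host-network pods to sit in Pending.
    kubectl --context newest-cni-383500 get pods -A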

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-windows-amd64.exe -p newest-cni-383500 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.46s)
Test skip (35/427)

Order skipped test Duration
5 TestDownloadOnly/v1.28.0/cached-images 0
6 TestDownloadOnly/v1.28.0/binaries 0
14 TestDownloadOnly/v1.34.2/cached-images 0
15 TestDownloadOnly/v1.34.2/binaries 0
23 TestDownloadOnly/v1.35.0-beta.0/cached-images 0
24 TestDownloadOnly/v1.35.0-beta.0/binaries 0
42 TestAddons/serial/GCPAuth/RealCredentials 0
44 TestAddons/parallel/Registry 26.67
46 TestAddons/parallel/Ingress 29.44
49 TestAddons/parallel/Olm 0
63 TestDockerEnvContainerd 0
64 TestHyperKitDriverInstallOrUpdate 0
65 TestHyperkitDriverSkipUpgrade 0
99 TestFunctional/parallel/DashboardCmd 300.01
103 TestFunctional/parallel/MountCmd 0
106 TestFunctional/parallel/ServiceCmdConnect 52.3
117 TestFunctional/parallel/PodmanEnv 0
149 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
150 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0
151 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
152 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0
192 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd 0.53
196 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd 0
210 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv 0
227 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect 0
228 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig 0
229 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0
230 TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS 0
257 TestGvisorAddon 0
286 TestImageBuild/serial/validateImageBuildWithBuildEnv 0
287 TestISOImage 0
354 TestScheduledStopUnix 0
355 TestSkaffold 0
374 TestNetworkPlugins/group/cilium 9.78
391 TestStartStop/group/disable-driver-mounts 0.6

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.34.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.2/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.2/cached-images (0.00s)

TestDownloadOnly/v1.34.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.2/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.2/binaries (0.00s)

TestDownloadOnly/v1.35.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.35.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.35.0-beta.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.35.0-beta.0/binaries (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:761: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Registry (26.67s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry
=== CONT  TestAddons/parallel/Registry
addons_test.go:384: registry stabilized in 9.0672ms
addons_test.go:386: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-6b586f9694-vcdxj" [2950beda-63db-4c84-a1a1-973957cef011] Running
addons_test.go:386: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.0054967s
addons_test.go:389: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:353: "registry-proxy-ng82c" [1045f157-d84c-4df5-b8a4-259b54b14443] Running
addons_test.go:389: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.007276s
addons_test.go:394: (dbg) Run:  kubectl --context addons-401400 delete po -l run=registry-test --now
addons_test.go:399: (dbg) Run:  kubectl --context addons-401400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:399: (dbg) Done: kubectl --context addons-401400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (13.5623234s)
addons_test.go:409: Unable to complete rest of the test due to connectivity assumptions
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-401400 addons disable registry --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-401400 addons disable registry --alsologtostderr -v=1: (1.8758791s)
--- SKIP: TestAddons/parallel/Registry (26.67s)

TestAddons/parallel/Ingress (29.44s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:211: (dbg) Run:  kubectl --context addons-401400 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:236: (dbg) Run:  kubectl --context addons-401400 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:249: (dbg) Run:  kubectl --context addons-401400 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:353: "nginx" [9beaaefe-d92b-4829-ae32-03e6bda3731c] Pending
helpers_test.go:353: "nginx" [9beaaefe-d92b-4829-ae32-03e6bda3731c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:353: "nginx" [9beaaefe-d92b-4829-ae32-03e6bda3731c] Running
addons_test.go:254: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.019836s
I1217 00:12:00.479141    4168 kapi.go:150] Service nginx in namespace default found.
addons_test.go:266: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-401400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: skipping ingress DNS test for any combination that needs port forwarding
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-401400 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-401400 addons disable ingress-dns --alsologtostderr -v=1: (3.1995808s)
addons_test.go:1055: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-401400 addons disable ingress --alsologtostderr -v=1
addons_test.go:1055: (dbg) Done: out/minikube-windows-amd64.exe -p addons-401400 addons disable ingress --alsologtostderr -v=1: (10.1610526s)
--- SKIP: TestAddons/parallel/Ingress (29.44s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:485: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker true windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DashboardCmd (300.01s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-045600 --alsologtostderr -v=1]
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-045600 --alsologtostderr -v=1] ...
helpers_test.go:520: unable to terminate pid 11268: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

TestFunctional/parallel/MountCmd (0s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

TestFunctional/parallel/ServiceCmdConnect (52.3s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-045600 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-045600 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:353: "hello-node-connect-7d85dfc575-cfq4q" [953f5507-6113-4eaf-8fe9-ae827cf679dd] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:353: "hello-node-connect-7d85dfc575-cfq4q" [953f5507-6113-4eaf-8fe9-ae827cf679dd] Running
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 52.0067998s
functional_test.go:1651: test is broken for port-forwarded drivers: https://github.com/kubernetes/minikube/issues/7383
--- SKIP: TestFunctional/parallel/ServiceCmdConnect (52.30s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (0.53s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-409700 --alsologtostderr -v=1]
functional_test.go:931: output didn't produce a URL
functional_test.go:925: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-409700 --alsologtostderr -v=1] ...
helpers_test.go:520: unable to terminate pid 11944: Access is denied.
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/DashboardCmd (0.53s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd
functional_test_mount_test.go:64: skipping: mount broken on windows: https://github.com/kubernetes/minikube/issues/8303
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/MountCmd (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== PAUSE TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
=== CONT  TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/PodmanEnv (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctionalNewestKubernetes/Versionv1.35.0-beta.0/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/cilium (9.78s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:615: 
----------------------- debugLogs start: cilium-891300 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-891300
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-891300
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-891300
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-891300
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-891300
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-891300

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-891300

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-891300

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-891300

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-891300

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-891300

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-891300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-891300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-891300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-891300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-891300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-891300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-891300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-891300" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-891300

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-891300

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-891300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-891300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-891300

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-891300

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-891300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-891300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-891300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-891300" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-891300" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-891300

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-891300" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-891300"

                                                
                                                
----------------------- debugLogs end: cilium-891300 [took: 9.3361805s] --------------------------------
helpers_test.go:176: Cleaning up "cilium-891300" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cilium-891300
--- SKIP: TestNetworkPlugins/group/cilium (9.78s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.6s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:176: Cleaning up "disable-driver-mounts-536900" profile ...
helpers_test.go:179: (dbg) Run:  out/minikube-windows-amd64.exe delete -p disable-driver-mounts-536900
--- SKIP: TestStartStop/group/disable-driver-mounts (0.60s)